Prosecution Insights
Last updated: April 19, 2026
Application No. 17/482,096

SEMANTIC LANE DESCRIPTION

Non-Final OA: §101, §103
Filed: Sep 22, 2021
Examiner: JAGOLINZER, SCOTT ROSS
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Mobileye Vision Technologies Ltd.
OA Round: 4 (Non-Final)

Grant Probability: 41% (Moderate)
OA Rounds: 4-5
To Grant: 3y 6m
With Interview: 60%

Examiner Intelligence

Career Allow Rate: 41% (45 granted / 110 resolved); -11.1% vs TC avg
Interview Lift: +19.2% (strong); comparing resolved cases with vs. without an interview
Avg Prosecution: 3y 6m typical timeline; 43 applications currently pending
Total Applications: 153 (career history, across all art units)

Statute-Specific Performance

§101: 13.3% (-26.7% vs TC avg)
§103: 57.7% (+17.7% vs TC avg)
§102: 11.6% (-28.4% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 110 resolved cases.
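The headline examiner figures are simple ratios over the 110 resolved cases. As a sanity check, a minimal sketch (variable names are mine; the Tech Center average of 52.0% is inferred from the stated -11.1% delta and is an assumption, not a figure from the report):

```python
# Recompute the examiner statistics shown in the cards above.
granted = 45
resolved = 110

career_allow_rate = granted / resolved           # ~0.409
print(f"Career allow rate: {career_allow_rate:.1%}")   # -> 40.9% (card rounds to 41%)

tc_avg = 0.520                                   # assumed TC average, back-solved from -11.1%
delta_vs_tc = career_allow_rate - tc_avg
print(f"Delta vs TC avg: {delta_vs_tc:+.1%}")    # -> -11.1%
```

The same pattern applies to the interview-lift card: it is the allowance rate among resolved cases that had an examiner interview minus the rate among those that did not.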

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/24/2025 has been entered.

Status of Claims

This action is in reply to the RCE filed on 12/24/2025. Claims 38-45, 49, 56-61, 63-84, and 86-89 are currently pending and have been examined. Claims 38, 74, and 79 are amended. Claim 85 is cancelled. Claims 38-45, 49, 56-61, 63-84, and 86-89 are currently rejected. This action is made NON-FINAL.

Response to Arguments

Applicant's arguments filed 12/24/2025 have been fully considered but they are not persuasive. Applicant's arguments regarding the §101 rejections are not persuasive. Applicant argues that "the amended claims are not an abstract idea under step 2A". The claimed subject matter of claim 38 revolves around analyzing an image of a road that contains at least two lanes, analyzing the image to identify an attribute of the second lane that indicates a lane use restriction, determining, based on the attribute, a first and a second characterization at different distances from the vehicle, and identifying, based on a comparison of those two characterizations, a change in the lane use restriction of the second lane. Each of these limitations is a step that a driver of a vehicle is capable of performing. A driver can view the road ahead and determine an attribute of a lane (such as an HOV or bus lane symbol).
A driver can then look along the road, notice that the lane changes from having a double solid line demarcating a restricted-use lane closer to the vehicle to a normal dashed line further ahead, and determine from that change that the HOV/bus lane is ending and becoming a regular lane. The additional claimed limitations simply recite generic computer components, the pre-solution activity of gathering image data from a sensor, and the post-solution activity of transmitting the determination to an external server for further processing. Therefore step 2A clearly demonstrates that the claims recite a mental process and do not recite any additional elements that integrate the judicial exception into a practical application. An example of a practical application would be adjusting the lane of the vehicle based on the determination, which would incorporate the practical application of vehicle control. See MPEP 2106.04(d): "The courts have also identified limitations that did not integrate a judicial exception into a practical application: Merely reciting the words 'apply it' (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f); Adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g)." Applicant additionally argues that the claims integrate the abstract idea into practical applications, namely that "it can be important for autonomous vehicles to determine navigational actions based on contextual information, such as lane use restrictions for a particular lane" and that "It may further be beneficial for this information to be stored in a navigational map to enable a vehicle to make these determinations more quickly". Neither of these "practical applications" is claimed; the argument is therefore not persuasive.
The final limitation of send[ing] the information indicative of the change in lane use restriction of the at least one additional lane of travel to a server for use in updating a road navigation model merely recites the post-solution activity of transmitting data to a server and then recites an intended use for that data; it does not actively claim the process of updating the navigation model or how the received data is used in performing that task, which shows it is ancillary to the inventive concept of determining the change in lane use restriction. Applicant additionally argues under step 2B that the claims recite significantly more. Applicant references the BASCOM ruling in arguing that the ordered combination of the limitations is not well-known. Applicant appears to apply this rationale to the claims as a whole; however, the step 2B determination of what is well-known concerns the "additional elements" outside the mental process, for example whether the computing system required to perform the mental process is well-known. Applicant's assertion regarding the ordering of the claims as a whole, including the mental process, is therefore not a correct application of the step 2B analysis. In the instant claims the additional elements are a navigation system, a processor, a memory, a camera, a trained model, and a server. None of those elements is claimed in any different or special configuration; each simply operates as such devices do in any application. Their inclusion is therefore well-understood, routine, and conventional, and does not amount to significantly more. The §101 rejections are accordingly maintained. Applicant appears to argue the instant version of the claims, which is addressed in the updated rejections below.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 38-45, 49, 56-61, 63-84, and 86-89 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 38-45, 49, 56-61, 63-84, and 86-89 are directed to a system, method, or product, each of which is a statutory category of invention. (Step 1: YES) The examiner has identified independent system Claim 38 as representative of the claimed invention for analysis; it is similar to independent product Claim 74 and method Claim 79. Claim 38 recites the limitations of: A navigation system for a host vehicle, the system comprising: at least one processor comprising circuitry and a memory, wherein the memory includes instructions that when executed by the circuitry cause the at least one processor to: receive at least one image captured by a camera from an environment of the host vehicle; analyze the at least one image to identify a representation of a lane of travel of the host vehicle along a road segment and a representation of at least one additional lane of travel along the road segment; analyze the at least one image to identify an attribute associated with the at least one additional lane of travel, wherein the attribute is indicative of a lane use restriction for the at least one additional lane of travel; determine, based on the attribute, information indicative of a first characterization of the at least one additional lane of travel associated with the lane use restriction at a first distance relative to the host vehicle and a second characterization of the at least one additional lane of travel associated with the lane use restriction at a second distance relative to the vehicle, the first and second
characterizations being output by a trained model; identify, based on a comparison of the first characterization with the second characterization, a change in lane use restriction of the at least one additional lane of travel; and send the information indicative of the change in lane use restriction of the at least one additional lane of travel to a server for use in updating a road navigation model.

These limitations, under their broadest reasonable interpretation, cover performance of the limitations as mental processes. Data analysis recites concepts performed in the human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a concept performed in the human mind, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea. The navigation system comprising a processor and memory in Claim 38 merely applies generic computer components to the recited abstract limitations. The recitation of generic computer components in a claim does not necessarily preclude that claim from reciting an abstract idea. Claims 74 and 79 are abstract for similar reasons. (Step 2A-Prong 1: YES. The claims recite an abstract idea.)

This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of: a navigation system for a host vehicle, the system comprising: at least one processor comprising circuitry and a memory, wherein the memory includes instructions that when executed by the circuitry. The computer hardware/software is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than instructions to apply the exception using a generic computer component.
Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea and are recited at a high level of generality. Therefore, claims 38, 74, and 79 are directed to an abstract idea without a practical application. (Step 2A-Prong 2: NO. The additional claimed elements are not integrated into a practical application.)

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an "inventive concept") to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using computer hardware amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. In the instant application, the limitations "receive at least one image captured by a camera from an environment of the host vehicle" and "send the information indicative of the change in lane use restriction of the at least one additional lane of travel to a server for use in updating a road navigation model" are insignificant extra-solution activity that does not incorporate the mental process into a practical application. Accordingly, these additional elements do not change the outcome of the analysis, whether considered separately or as an ordered combination. Thus, claims 38, 74, and 79 are not patent eligible. (Step 2B: NO. The claims do not provide significantly more.)
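The data flow recited in claim 38, as quoted above, amounts to a five-step pipeline: capture an image, identify lanes, extract a restriction attribute, characterize the additional lane at two distances via a trained model, and compare the characterizations to detect a change. A minimal sketch of that comparison logic (every name and threshold here is a hypothetical placeholder of mine, not language from the application or the cited references; the trained model is stood in for by a trivial rule):

```python
from dataclasses import dataclass

@dataclass
class LaneCharacterization:
    restriction: str   # e.g. "HOV", "bus", "none"
    distance_m: float  # distance from the host vehicle at which it was observed

def characterize(attribute: str, distance_m: float) -> LaneCharacterization:
    # Stand-in for the claimed "trained model" output: map a detected
    # lane-marking attribute to a restriction label.
    restriction = "HOV" if attribute == "double_solid_with_diamond" else "none"
    return LaneCharacterization(restriction, distance_m)

def detect_restriction_change(near_attr: str, far_attr: str) -> bool:
    # Characterize the additional lane at two distances, then compare
    # the characterizations (the claimed "identify ... a change" step).
    near = characterize(near_attr, 30.0)
    far = characterize(far_attr, 120.0)
    return near.restriction != far.restriction

# A double solid line with an HOV diamond near the vehicle that becomes a
# plain dashed line farther ahead signals that the restriction is ending;
# the claim then sends that determination to a server for map updating.
print(detect_restriction_change("double_solid_with_diamond", "dashed"))  # True
```

This is also the crux of the §101 dispute: the examiner treats the comparison as a mental process and the camera input and server upload as extra-solution activity.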
The dependent claims further define the abstract idea present in their respective independent claims 38, 74, and 79, thus correspond to Mental Processes, and are abstract for the reasons presented above. The dependent claims do not include any additional elements that integrate the abstract idea into a practical application or that are sufficient to amount to significantly more than the judicial exception, whether considered individually or as an ordered combination. Therefore, the dependent claims are directed to an abstract idea. Thus, claims 38-45, 49, 56-61, 63-84, and 86-89 are not patent-eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3.
Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 38-41, 43-45, 49, 56, 63, 66-67, 73-74, 77-79, 82-83, and 86-89 are rejected under 35 U.S.C. 103 as being unpatentable over Takama et al. (US 2019/0225265), herein Takama, in view of Mizoguchi (US 2020/0180639), herein Mizoguchi, and Nishibashi et al. (US 2013/0103304), herein Nishibashi.

Regarding claim 38: Takama teaches: A navigation system for a host vehicle (an autonomous navigation system of the subject vehicle 1 [0026]), the system comprising: at least one processor comprising circuitry and a memory (The ECU 100 is a processor which includes a central processing unit (CPU) 102 and a memory 104 [0015]), wherein the memory includes instructions that when executed by the circuitry (The memory 104 has stored thereon instructions which program the CPU 102 to perform a variety of tasks as will be described later [0015]) cause the at least one processor to: receive at least one image captured by a camera (the road sensor 112 is implemented as an optical camera which captures an optical signal of the road on which the subject vehicle 1 is travelling [0023]) from an environment of the host vehicle (The road sensor 112 outputs the optical signal of the road to the ECU 100. The road sensor 112 may be implemented as a front camera that acquires an optical signal of the road in front of the subject vehicle 1, or a surround view camera that acquires an optical signal of the road around the subject vehicle 1. In the case of a surround view camera, the road in front of the subject vehicle 1 is still captured. [0023]); analyze the at least one image to identify (if the road sensor 112 is implemented as an optical camera, then at step 520, an image of the HOV lane ahead of the subject vehicle 1 is captured and analyzed.
[0035]) a representation of a lane of travel of the host vehicle along a road segment (The road sensor 112 may be implemented as a front camera that acquires an optical signal of the road in front of the subject vehicle 1, or a surround view camera that acquires an optical signal of the road around the subject vehicle 1. In the case of a surround view camera, the road in front of the subject vehicle 1 is still captured. [0012]; if the road sensor 112 is implemented as an optical camera, then at step 520, the ECU 100 determines whether an entrance or exit is present based on the optical image of the HOV lane captured by the road sensor 112. This may be done by determining whether a road surface marker separating the HOV lane and an adjacent traffic lane indicates that entry or exit is permitted in accordance with local traffic regulations, for instance as shown in the examples of FIGS. 3 and 4. [0036]) and a representation of at least one additional lane of travel along the road segment (At step 520, the ECU 100 controls the road sensor 112 to detect the HOV lane in connection with the received lane change request, i.e., the target HOV lane for the subject vehicle 1 to enter or exit. For example, if the road sensor 112 is implemented as an optical camera, then at step 520, an image of the HOV lane ahead of the subject vehicle 1 is captured and analyzed. [0035]); analyze the at least one image to identify an attribute associated with the at least one additional lane of travel (the road sensor 112 to detect the HOV lane in connection with the received lane change request, i.e., the target HOV lane for the subject vehicle 1 to enter or exit. For example, if the road sensor 112 is implemented as an optical camera, then at step 520, an image of the HOV lane ahead of the subject vehicle 1 is captured and analyzed. Then, the process continues to step 530. 
[0035]), wherein the attribute is indicative of a lane use restriction for the at least one additional lane of travel (if the road sensor 112 is implemented as an optical camera, then at step 520, the ECU 100 determines whether an entrance or exit is present based on the optical image of the HOV lane captured by the road sensor 112. This may be done by determining whether a road surface marker separating the HOV lane and an adjacent traffic lane indicates that entry or exit is permitted in accordance with local traffic regulations, for instance as shown in the examples of FIGS. 3 and 4. [0036]); determine, based on the attribute, information indicative of a first characterization of the at least one additional lane of travel (The road surface marker 38 is a pair of solid lines, indicating that entry into the HOV lane 34 from the traffic lane 32 is not permitted. In other words, the presence of the road surface marker 38 corresponds to the non-presence of an entrance of the HOV lane 34. [0029]; At step 620, the ECU 100 determines whether the number of passengers in the subject vehicle 1 is less than a predetermined number of passengers required by local regulations for a vehicle to enter the destination lane of the lane change request. For example, if the destination lane is an HOV lane (i.e., the lane change request is a request for the subject vehicle 1 to enter an HOV lane), the predetermined number of passengers may be two, or three, depending on local regulations. If the destination lane is a standard traffic lane, the predetermined number of passengers is zero, i.e., no requirement as to the number of passengers. In this case, the determination at step 620 is always “NO” (since the number of passengers cannot be below zero). 
[0043]) and a second characterization of the at least one additional lane of travel associated with the lane use restriction (the road surface marker 36 is a dashed line, indicating that entry into the HOV lane 34 from the traffic lane 32 is permitted. In other words, the presence of the road surface marker 36 corresponds to the presence of an entrance of the HOV lane 34 [0028])

Takama does not explicitly teach, however Mizoguchi teaches: and send the information indicative of the change in lane use restriction of the at least one additional lane of travel to a server for use in updating a road navigation model (The forward traveling environment recognized by the forward traveling environment recognizer 21d is read by the local dynamic map setting/updating unit 12a and updates the map information of the dynamic information (the quasi-static information layer, the quasi-dynamic information layer, the dynamic information layer) stored in the local dynamic map 17a of the map database 17 on the real-time basis. Therefore, since sequential update is performed also by the information obtained by the forward traveling environment recognizer 21d of the camera unit 21 (information that the traffic jam extending to the traveling lane 101a of a main lane 101 from the direction of a branch lane 102 which is a target travel path has been cleared, that the lane restriction has been already cancelled, that the traffic jam caused by the lane restriction has been cleared, or that the end of the traffic jam extends to the own vehicle M side or the like), the latest road map information is obtained at all times in the dynamic map of the vicinity of the own vehicle's location and the vicinity of the target travel path read by the road map information acquirer 12d. [0054]).
It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama to include the teachings as taught by Mizoguchi with a reasonable expectation of success. Mizoguchi teaches the benefit of "The cloud server 1 processes the traffic information received in a time series from each of the traffic information centers 2 on a real-time basis and integrally manages the road traffic information stored in the global dynamic map 6 by sequentially updating it [Mizoguchi, 0033]" and "In the dynamic information layer 6d, information with the most changes and needed to be updated on the real-time basis is stored. Such information is obtained on the real-time basis by inter-vehicle communication, road-vehicle communication, and pedestrian-vehicle communication. The information includes signal indication (lighted color) information, railroad crossing gate information, information on vehicle traveling straight in intersection, information on pedestrian/bicycle in intersection and the like. Since the dynamic information needs to be obtained on the real-time basis, it is updated in a cycle within one second [Mizoguchi, 0037]".

Takama in view of Mizoguchi does not explicitly teach, however Nishibashi teaches: determine, based on the attribute (It is then checked to see whether an HOV lane exists ahead of the vehicle or on the route (step ST4c) [0082]), information indicative of a first characterization of the at least one additional lane of travel (fig.
5 showing beginning point of entrance/exit of HOV lane) associated with the lane use restriction at a first distance relative to the host vehicle (when it is determined in step ST4c that an HOV lane exists ahead of the vehicle or on the route, the position of a lane changing permitted section in which the vehicle is permitted to make a lane change between the HOV lane and a normal lane is then acquired (step ST5c) [0083]) and a second characterization of the at least one additional lane of travel (fig. 5 showing ending point of entrance/exit of HOV lane) associated with the lane use restriction at a second distance relative to the vehicle (The length (distance) of the lane changing permitted section in which the vehicle is permitted to make a lane change between the HOV lane and the normal lane is then acquired (step ST6c) [0083]), the first and second characterizations being output by a trained model (the software-based method will be explained in detail. FIG. 20 shows a case in which a lane changing permitted section in which the vehicle is permitted to make a lane change between an HOV lane and a normal lane exists ahead of the vehicle [0167]); wherein the server is configured to compare the first characterization with the second characterization (It is then checked to see whether or not the lane changing permitted section in which the vehicle is permitted to make a lane change between the HOV lane and the normal lane is long (step ST7c) [0083]) to identify a change in lane use restriction of the at least one additional lane of travel (an entrance and exit extracting unit for extracting an entrance and exit section which exists ahead of a vehicle and in which the vehicle is permitted to make a lane change between a special lane and a normal lane on a basis of the map data acquired by the map data acquiring unit and position information about the vehicle [0010]). 
It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama and Mizoguchi to include the teachings as taught by Nishibashi with a reasonable expectation of success. Nishibashi teaches the benefit of "A route search processing unit carries out a route search process in consideration of the enable or disable of use of an HOV lane by using the map data stored in the data buffer. When providing route guidance using an HOV lane, an HOV lane guidance unit provides guidance on a certain lane change with an image and a voice at the time that an entrance or exit point at which the user's vehicle should change its traveling direction falls within a predetermined distance from the position of the vehicle [Nishibashi, 0003]".

Regarding claim 39: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 38, upon which this claim is dependent. Takama further teaches: wherein the at least one additional lane of travel is adjacent to the lane of travel of the host vehicle (see at least fig. 3a and 3b showing adjacent lanes to vehicle 1.).

Regarding claim 40: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 39, upon which this claim is dependent. Takama further teaches: wherein the at least one additional lane of travel is one or more lanes away from the lane of travel of the host vehicle (see at least figs. 4a and 4b showing vehicle 1 in the far left lane and capable of detecting the far right lane.). Mizoguchi further teaches: wherein the at least one additional lane of travel is one or more lanes away from the lane of travel of the host vehicle (see at least fig. 9b showing vehicle in lane 101b assessing and attempting to move into branching lane 102).

Regarding claim 41: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 38, upon which this claim is dependent.
Takama further teaches: wherein determining at least one of the information indicative of the first characterization of the at least one additional lane of travel or the information indicative of the second characterization of the at least one additional lane of travel further includes analyzing historical information (The layers 6a to 6d are divided in accordance with a degree of change (variation) on a time axis, and the information in each of the layers 6a to 6d is sequentially updated in each period (time) determined in advance. That is, since the information in the static information layer 6a has small change, the information is updated in a cycle within one month. In the quasi-static information layer 6b, information on the state of matters to be changed is scheduled or predicted in advance is stored. The information has more changes than the information in the static information layer 6a but has the smallest change among the information in the dynamic information layers. The information includes a schedule of lane restriction due to a construction work, seasonal scheduled restriction for events, traffic jam prediction, wide-area weather forecast, and the like. Since this quasi-static information has smaller dynamic change, it is updated in a cycle within one hour. [0035]).

Regarding claim 43: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 41, upon which this claim is dependent.
Mizoguchi further teaches: wherein the historical information is indicative of a presence of a lane split in the environment of the host vehicle (if the target travel path on which the own vehicle M is to automatically travel along the traveling lane 101a based on the road traffic information on the vicinity of the own vehicle and the vicinity of the target travel path obtained by the road traffic information acquirer 23a is set to the direction of the branch lane 102 connecting the traveling lane 101a and an exit of the main lane or another main lane, the vehicle control calculator 23b calculates timing when the own vehicle M is made to change the lane from the traveling lane 101a to the direction of the branch lane 102.).

Regarding claim 44: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 38, upon which this claim is dependent. Takama further teaches: wherein the attribute further includes at least a portion of the at least one image associated with the at least one additional lane of travel (At step 520, the ECU 100 controls the road sensor 112 to detect the HOV lane in connection with the received lane change request, i.e., the target HOV lane for the subject vehicle 1 to enter or exit. For example, if the road sensor 112 is implemented as an optical camera, then at step 520, an image of the HOV lane ahead of the subject vehicle 1 is captured and analyzed. Then, the process continues to step 530. [0035]).

Regarding claim 45: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 38, upon which this claim is dependent. Takama further teaches: wherein the attribute includes a descriptor indicative of the lane use restriction (see at least figs. 3 and 4 showing the diamond that denotes an HOV lane; the road information stored on the memory device of the location sensor 114 includes information describing the presence of an HOV lane as well as local regulations associated with the HOV lanes.
Examples of local regulations will be described later. The location sensor 114 detects the current location of the subject vehicle 1 with the GPS receiver and outputs this data to the ECU 100 along with the road information stored on the memory device. In other words, the location sensor 114 outputs information about an HOV lane, similar to the road sensor 112. [0024]), the descriptor including at least one of a text descriptor (examiner is interpreting this limitation in the alternative), a directional arrow (examiner is interpreting this limitation in the alternative), or a symbol (see at least figs. 3 and 4 showing the diamond that denotes an HOV lane).

Regarding claim 49: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 48, upon which this claim is dependent. Takama further teaches: wherein the descriptor includes a symbol (see at least figs. 3 and 4 showing the diamond that denotes an HOV lane), and wherein the symbol includes at least one of an HOV diamond (see at least figs. 3 and 4 showing the diamond that denotes an HOV lane), a bicycle icon (examiner is taking this limitation in the alternative but notes that bike icons are known in the art for denoting a bike lane.), [a dashed lane marking], a double dashed lane marking, a solid lane marking (a road surface marker separating an HOV lane and a traffic lane may be a single solid line (indicating that entry or exit is not permitted), a dashed line paired with a solid line (indicating that only one of entry or exit is permitted), or any other type of marker consistent with local traffic regulations. [0032]), a double solid lane marking (in FIG. 3B, the adjacent HOV lane 34 is separated from the traffic lane 32 by a road surface marker 38. The road surface marker 38 is a pair of solid lines, indicating that entry into the HOV lane 34 from the traffic lane 32 is not permitted.
In other words, the presence of the road surface marker 38 corresponds to the non-presence of an entrance of the HOV lane 34. [0029]), [a color], or [a directional sign]. Mizoguchi further teaches: a dashed lane marking (the road surface marker 36 is a dashed line, indicating that entry into the HOV lane 34 from the traffic lane 32 is permitted. In other words, the presence of the road surface marker 36 corresponds to the presence of an entrance of the HOV lane 34. [0028]), a double dashed lane marking (a road surface marker separating an HOV lane and a traffic lane may be a single solid line (indicating that entry or exit is not permitted), a dashed line paired with a solid line (indicating that only one of entry or exit is permitted), or any other type of marker consistent with local traffic regulations. [0032]; examiner notes that double dashed lines are known traffic markings in the art.), Regarding claim 56: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 38, upon which this claim is dependent. Mizoguchi further teaches: wherein the attribute is associated with an observed motion of a target vehicle (The position of the vehicle (Pe) at the end of the traffic jam is acquired based on the road traffic information read at step S2. However if the position of the vehicle (Pe) can be acquired from the traveling environment image information obtained by the camera unit 21, the distance from the vehicle (Pe) to the own vehicle M is calculated based on the latest traveling environment image information. [0105]; As a result, since detour routes r1 and r2 for traveling on the passing lane 101b are automatically set for the original target travel path set to the traveling lane 101a indicated by the one-dot chain line in FIG. 11, the traffic jam is detoured, and the automatic driving can be continued without stopping the automatic driving. 
[0110]; examiner notes that the stopped vehicle Pe (observed motion of zero) is what is used to determine that lane is blocked and make the lane adjustment.). Regarding claim 63: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 38, upon which this claim is dependent. Takama further teaches: wherein the lane use restriction includes an HOV lane use restriction (the road information stored on the memory device of the location sensor 114 includes information describing the presence of an HOV lane as well as local regulations associated with the HOV lanes. Examples of local regulations will be described later. The location sensor 114 detects the current location of the subject vehicle 1 with the GPS receiver and outputs this data to the ECU 100 along with the road information stored on the memory device. In other words, the location sensor 114 outputs information about an HOV lane, similar to the road sensor 112. [0024]), a bus lane use restriction (examiner is taking this limitation in the alternative but notes that a bus lane is a known restricted use lane.), or a bicycle lane use restriction (examiner is taking this limitation in the alternative but notes that a bike lane is a known restricted use lane.) Regarding claim 66: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 38, upon which this claim is dependent. 
Mizoguchi further teaches: wherein at least one of the information indicative of the first characterization of the at least one additional lane of travel or the information indicative of the second characterization of the at least one additional lane of travel includes a lane direction (it is desirable to provide an automatic driving assist apparatus which can smoothly lead the own vehicle to the branch lane direction by detouring the lane restricted section without stopping the automatic driving, and cumbersomeness felt by the driver can be alleviated, even if the target travel path of the own vehicle is set to the branch lane direction from the traveling lane and the lane restricted section is set on the target travel path before the branch lane. [0030]; the acceleration/deceleration controller 33 and the steering controller 31 are operated, and the own vehicle M is made to travel along the target travel path in the branch lane 102 direction. After the lane change control is finished, the ordinary automatic driving is performed along the target travel path. [0082]). Regarding claim 67: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 66, upon which this claim is dependent. Mizoguchi further teaches: wherein the lane direction includes left, right, or straight (see figs. showing the branching lane 102 going off to the left; the acceleration/deceleration controller 33 and the steering controller 31 are operated, and the own vehicle M is made to travel along the target travel path in the branch lane 102 direction. After the lane change control is finished, the ordinary automatic driving is performed along the target travel path. [0082]). Regarding claim 73: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 38, upon which this claim is dependent.
Mizoguchi further teaches: wherein at least one of the information indicative of the first characterization of the at least one additional lane of travel or the information indicative of the second characterization of the at least one additional lane of travel includes a spatial relationship between the at least one additional lane of travel and the lane of travel of the host vehicle (When the own vehicle M is to change the lane from the traveling lane 101a to the branch lane 102, the lane change can be accomplished by one route change as indicated by the arrow in FIG. 7. On the other hand, if the own vehicle M is to change the lane from the passing lane 101b to the branch lane 102 direction as illustrated in FIG. 9, since the own vehicle M needs to travel on the traveling lane 101a once, the route change needs to be performed twice. As a result, a distance required for the lane change becomes longer in the case of traveling on the passing lane 101b than the case of traveling on the traveling lane 101a. [0075]). Regarding claim 74: Takama teaches: A non-transitory computer-readable medium storing instructions that, when executed by at least one processor, are configured to cause at least one processor (The ECU 100 is a processor which includes a central processing unit (CPU) 102 and a memory 104. The CPU 102 is preferably a microcomputer or microprocessor. The memory 104 is preferably a semiconductor memory such as random access memory (RAM), read only memory (ROM), flash memory, of a combination of these. The memory 104 has stored thereon instructions which program the CPU 102 to perform a variety of tasks as will be described later. [0015]) to: receive at least one image captured by a camera (the road sensor 112 is implemented as an optical camera which captures an optical signal of the road on which the subject vehicle 1 is travelling [0023]) from an environment of the host vehicle (The road sensor 112 outputs the optical signal of the road to the ECU 100. 
The road sensor 112 may be implemented as a front camera that acquires an optical signal of the road in front of the subject vehicle 1, or a surround view camera that acquires an optical signal of the road around the subject vehicle 1. In the case of a surround view camera, the road in front of the subject vehicle 1 is still captured. [0023]); analyze the at least one image to identify (if the road sensor 112 is implemented as an optical camera, then at step 520, an image of the HOV lane ahead of the subject vehicle 1 is captured and analyzed. [0035]) a representation of a lane of travel of the host vehicle along a road segment (The road sensor 112 may be implemented as a front camera that acquires an optical signal of the road in front of the subject vehicle 1, or a surround view camera that acquires an optical signal of the road around the subject vehicle 1. In the case of a surround view camera, the road in front of the subject vehicle 1 is still captured. [0012]; if the road sensor 112 is implemented as an optical camera, then at step 520, the ECU 100 determines whether an entrance or exit is present based on the optical image of the HOV lane captured by the road sensor 112. This may be done by determining whether a road surface marker separating the HOV lane and an adjacent traffic lane indicates that entry or exit is permitted in accordance with local traffic regulations, for instance as shown in the examples of FIGS. 3 and 4. [0036]) and a representation of at least one additional lane of travel along the road segment (At step 520, the ECU 100 controls the road sensor 112 to detect the HOV lane in connection with the received lane change request, i.e., the target HOV lane for the subject vehicle 1 to enter or exit. For example, if the road sensor 112 is implemented as an optical camera, then at step 520, an image of the HOV lane ahead of the subject vehicle 1 is captured and analyzed.
[0035]); analyze the at least one image to identify an attribute associated with the at least one additional lane of travel (the road sensor 112 to detect the HOV lane in connection with the received lane change request, i.e., the target HOV lane for the subject vehicle 1 to enter or exit. For example, if the road sensor 112 is implemented as an optical camera, then at step 520, an image of the HOV lane ahead of the subject vehicle 1 is captured and analyzed. Then, the process continues to step 530. [0035]), wherein the attribute includes a lane use restriction for the at least one additional lane of travel (if the road sensor 112 is implemented as an optical camera, then at step 520, the ECU 100 determines whether an entrance or exit is present based on the optical image of the HOV lane captured by the road sensor 112. This may be done by determining whether a road surface marker separating the HOV lane and an adjacent traffic lane indicates that entry or exit is permitted in accordance with local traffic regulations, for instance as shown in the examples of FIGS. 3 and 4. [0036]); determine, based on the attribute, information indicative of a first characterization of the at least one additional lane of travel (The road surface marker 38 is a pair of solid lines, indicating that entry into the HOV lane 34 from the traffic lane 32 is not permitted. In other words, the presence of the road surface marker 38 corresponds to the non-presence of an entrance of the HOV lane 34. [0029]; At step 620, the ECU 100 determines whether the number of passengers in the subject vehicle 1 is less than a predetermined number of passengers required by local regulations for a vehicle to enter the destination lane of the lane change request. For example, if the destination lane is an HOV lane (i.e., the lane change request is a request for the subject vehicle 1 to enter an HOV lane), the predetermined number of passengers may be two, or three, depending on local regulations. 
If the destination lane is a standard traffic lane, the predetermined number of passengers is zero, i.e., no requirement as to the number of passengers. In this case, the determination at step 620 is always “NO” (since the number of passengers cannot be below zero). [0043]); and Takama does not explicitly teach, however Mizoguchi teaches: send the information indicative of change in lane use restriction of the at least one additional lane of travel to a server for use in updating a road navigation model (The forward traveling environment recognized by the forward traveling environment recognizer 21d is read by the local dynamic map setting/updating unit 12a and updates the map information of the dynamic information (the quasi-static information layer, the quasi-dynamic information layer, the dynamic information layer) stored in the local dynamic map 17a of the map database 17 on the real-time basis. Therefore, since sequential update is performed also by the information obtained by the forward traveling environment recognizer 21d of the camera unit 21 (information that the traffic jam extending to the traveling lane 101a of a main lane 101 from the direction of a branch lane 102 which is a target travel path has been cleared, that the lane restriction has been already cancelled, that the traffic jam caused by the lane restriction has been cleared, or that the end of the traffic jam extends to the own vehicle M side or the like), the latest road map information is obtained at all times in the dynamic map of the vicinity of the own vehicle's location and the vicinity of the target travel path read by the road map information acquirer 12d. [0054]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama to include the teachings as taught by Mizoguchi with a reasonable expectation of success. 
Mizoguchi teaches the benefit of “The cloud server 1 processes the traffic information received in a time series from each of the traffic information centers 2 on a real-time basis and integrally manages the road traffic information stored in the global dynamic map 6 by sequentially updating it [Mizoguchi, 0033]” and “In the dynamic information layer 6d, information with the most changes and needed to be updated on the real-time basis is stored. Such information is obtained on the real-time basis by inter-vehicle communication, road-vehicle communication, and pedestrian-vehicle communication. The information includes signal indication (lighted color) information, railroad crossing gate information, information on vehicle traveling straight in intersection, information on pedestrian/bicycle in intersection and the like. Since the dynamic information needs to be obtained on the real-time basis, it is updated in a cycle within one second [Mizoguchi, 0037]”. Takama in view of Mizoguchi does not explicitly teach, however Nishibashi teaches: determine, based on the attribute (It is then checked to see whether an HOV lane exists ahead of the vehicle or on the route (step ST4c) [0082]), information indicative of a first characterization of the at least one additional lane of travel (fig. 5 showing beginning point of entrance/exit of HOV lane) associated with the lane use restriction at a first distance relative to the host vehicle (when it is determined in step ST4c that an HOV lane exists ahead of the vehicle or on the route, the position of a lane changing permitted section in which the vehicle is permitted to make a lane change between the HOV lane and a normal lane is then acquired (step ST5c) [0083]) and a second characterization of the at least one additional lane of travel (fig. 
5 showing ending point of entrance/exit of HOV lane) associated with the lane use restriction at a second distance relative to the vehicle (The length (distance) of the lane changing permitted section in which the vehicle is permitted to make a lane change between the HOV lane and the normal lane is then acquired (step ST6c) [0083]), the first and second characterizations being output by a trained model (the software-based method will be explained in detail. FIG. 20 shows a case in which a lane changing permitted section in which the vehicle is permitted to make a lane change between an HOV lane and a normal lane exists ahead of the vehicle [0167]); wherein the server is configured to compare the first characterization with the second characterization (It is then checked to see whether or not the lane changing permitted section in which the vehicle is permitted to make a lane change between the HOV lane and the normal lane is long (step ST7c) [0083]) to identify a change in lane use restriction of the at least one additional lane of travel (an entrance and exit extracting unit for extracting an entrance and exit section which exists ahead of a vehicle and in which the vehicle is permitted to make a lane change between a special lane and a normal lane on a basis of the map data acquired by the map data acquiring unit and position information about the vehicle [0010]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama and Mizoguchi to include the teachings as taught by Nishibashi with a reasonable expectation of success. Nishibashi teaches the benefit of “A route search processing unit carries out a route search process in consideration of the enable or disable of use of an HOV lane by using the map data stored in the data buffer. 
When providing route guidance using an HOV lane, an HOV lane guidance unit provides guidance on a certain lane change with an image and a voice at the time that an entrance or exit point at which the user's vehicle should change its traveling direction falls within a predetermined distance from the position of the vehicle [Nishibashi, 0003]”. Regarding claim 77: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 74, upon which this claim is dependent. Takama further teaches: wherein the lane use restriction includes an HOV lane use restriction (see at least figs. 3 and 4 showing the diamond that denotes an HOV lane), a bus lane use restriction (examiner is taking this limitation in the alternative but notes that bus lanes are known in the art.), or a bicycle lane use restriction (examiner is taking this limitation in the alternative but notes that bike icons are known in the art for denoting a bike lane.). Regarding claim 78: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 74, upon which this claim is dependent. Takama further teaches: wherein the attribute includes a descriptor indicative of the lane use restriction (see at least figs. 3 and 4 showing the diamond that denotes an HOV lane; the road information stored on the memory device of the location sensor 114 includes information describing the presence of an HOV lane as well as local regulations associated with the HOV lanes. Examples of local regulations will be described later. The location sensor 114 detects the current location of the subject vehicle 1 with the GPS receiver and outputs this data to the ECU 100 along with the road information stored on the memory device. In other words, the location sensor 114 outputs information about an HOV lane, similar to the road sensor 112. [0024]). 
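For orientation, the road-surface-marker semantics quoted from Takama throughout this rejection (a single solid line indicating entry or exit is not permitted, a dashed line indicating it is permitted, and a dashed line paired with a solid line indicating only one of entry or exit is permitted, per [0028]-[0029] and [0032]) can be sketched in a few lines of Python. This is an illustrative sketch only; all identifiers are hypothetical and do not appear in any cited reference.

```python
from enum import Enum, auto

class Marker(Enum):
    """Hypothetical labels for the marker types described in Takama [0028]-[0032]."""
    DASHED = auto()        # entry and exit permitted
    SOLID = auto()         # entry and exit not permitted
    DASHED_SOLID = auto()  # only one of entry or exit permitted (crossable from the dashed side)
    DOUBLE_SOLID = auto()  # pair of solid lines; entry not permitted (Takama fig. 3B)

def entry_permitted(marker: Marker, dashed_on_host_side: bool = True) -> bool:
    """Return True if a lane change across this marker into the HOV lane is allowed."""
    if marker is Marker.DASHED:
        return True
    if marker is Marker.DASHED_SOLID:
        # Per the quoted local-regulation logic, crossing is allowed only from the dashed side.
        return dashed_on_host_side
    return False  # SOLID and DOUBLE_SOLID both bar entry
```

In Takama's terms, a True result corresponds to the "presence of an entrance" of the HOV lane at the detected marker, and False to its non-presence.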
Regarding claim 79: Takama teaches: A method for navigating a host vehicle (an autonomous navigation system of the subject vehicle 1 [0026]), the method comprising: receiving at least one image captured by a camera (the road sensor 112 is implemented as an optical camera which captures an optical signal of the road on which the subject vehicle 1 is travelling [0023]) from an environment of a host vehicle (The road sensor 112 outputs the optical signal of the road to the ECU 100. The road sensor 112 may be implemented as a front camera that acquires an optical signal of the road in front of the subject vehicle 1, or a surround view camera that acquires an optical signal of the road around the subject vehicle 1. In the case of a surround view camera, the road in front of the subject vehicle 1 is still captured. [0023]); analyzing the at least one image to identify (if the road sensor 112 is implemented as an optical camera, then at step 520, an image of the HOV lane ahead of the subject vehicle 1 is captured and analyzed. [0035]) a representation of a lane of travel of the host vehicle along a road segment (The road sensor 112 may be implemented as a front camera that acquires an optical signal of the road in front of the subject vehicle 1, or a surround view camera that acquires an optical signal of the road around the subject vehicle 1. In the case of a surround view camera, the road in front of the subject vehicle 1 is still captured. [0012]; if the road sensor 112 is implemented as an optical camera, then at step 520, the ECU 100 determines whether an entrance or exit is present based on the optical image of the HOV lane captured by the road sensor 112. This may be done by determining whether a road surface marker separating the HOV lane and an adjacent traffic lane indicates that entry or exit is permitted in accordance with local traffic regulations, for instance as shown in the examples of FIGS. 3 and 4.
[0036]) and a representation of at least one additional lane of travel along the road segment (At step 520, the ECU 100 controls the road sensor 112 to detect the HOV lane in connection with the received lane change request, i.e., the target HOV lane for the subject vehicle 1 to enter or exit. For example, if the road sensor 112 is implemented as an optical camera, then at step 520, an image of the HOV lane ahead of the subject vehicle 1 is captured and analyzed. [0035]); analyzing the at least one image to identify an attribute associated with the at least one additional lane of travel (the road sensor 112 to detect the HOV lane in connection with the received lane change request, i.e., the target HOV lane for the subject vehicle 1 to enter or exit. For example, if the road sensor 112 is implemented as an optical camera, then at step 520, an image of the HOV lane ahead of the subject vehicle 1 is captured and analyzed. Then, the process continues to step 530. [0035]), wherein the attribute includes a lane use restriction for the at least one additional lane of travel (if the road sensor 112 is implemented as an optical camera, then at step 520, the ECU 100 determines whether an entrance or exit is present based on the optical image of the HOV lane captured by the road sensor 112. This may be done by determining whether a road surface marker separating the HOV lane and an adjacent traffic lane indicates that entry or exit is permitted in accordance with local traffic regulations, for instance as shown in the examples of FIGS. 3 and 4. [0036]); determining, based on the attribute, information indicative of a characterization of the at least one additional lane of travel (The road surface marker 38 is a pair of solid lines, indicating that entry into the HOV lane 34 from the traffic lane 32 is not permitted. In other words, the presence of the road surface marker 38 corresponds to the non-presence of an entrance of the HOV lane 34. 
[0029]; At step 620, the ECU 100 determines whether the number of passengers in the subject vehicle 1 is less than a predetermined number of passengers required by local regulations for a vehicle to enter the destination lane of the lane change request. For example, if the destination lane is an HOV lane (i.e., the lane change request is a request for the subject vehicle 1 to enter an HOV lane), the predetermined number of passengers may be two, or three, depending on local regulations. If the destination lane is a standard traffic lane, the predetermined number of passengers is zero, i.e., no requirement as to the number of passengers. In this case, the determination at step 620 is always “NO” (since the number of passengers cannot be below zero). [0043]); and Takama does not explicitly teach, however Mizoguchi teaches: sending the information indicative of the change in lane use restriction of the at least one additional lane of travel to a server for use in updating a road navigation model (The forward traveling environment recognized by the forward traveling environment recognizer 21d is read by the local dynamic map setting/updating unit 12a and updates the map information of the dynamic information (the quasi-static information layer, the quasi-dynamic information layer, the dynamic information layer) stored in the local dynamic map 17a of the map database 17 on the real-time basis. 
Therefore, since sequential update is performed also by the information obtained by the forward traveling environment recognizer 21d of the camera unit 21 (information that the traffic jam extending to the traveling lane 101a of a main lane 101 from the direction of a branch lane 102 which is a target travel path has been cleared, that the lane restriction has been already cancelled, that the traffic jam caused by the lane restriction has been cleared, or that the end of the traffic jam extends to the own vehicle M side or the like), the latest road map information is obtained at all times in the dynamic map of the vicinity of the own vehicle's location and the vicinity of the target travel path read by the road map information acquirer 12d. [0054]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama to include the teachings as taught by Mizoguchi with a reasonable expectation of success. Mizoguchi teaches the benefit of “The cloud server 1 processes the traffic information received in a time series from each of the traffic information centers 2 on a real-time basis and integrally manages the road traffic information stored in the global dynamic map 6 by sequentially updating it [Mizoguchi, 0033]” and “In the dynamic information layer 6d, information with the most changes and needed to be updated on the real-time basis is stored. Such information is obtained on the real-time basis by inter-vehicle communication, road-vehicle communication, and pedestrian-vehicle communication. The information includes signal indication (lighted color) information, railroad crossing gate information, information on vehicle traveling straight in intersection, information on pedestrian/bicycle in intersection and the like. Since the dynamic information needs to be obtained on the real-time basis, it is updated in a cycle within one second [Mizoguchi, 0037]”. 
Takama in view of Mizoguchi does not explicitly teach, however Nishibashi teaches: determine, based on the attribute (It is then checked to see whether an HOV lane exists ahead of the vehicle or on the route (step ST4c) [0082]), information indicative of a first characterization of the at least one additional lane of travel (fig. 5 showing beginning point of entrance/exit of HOV lane) associated with the lane use restriction at a first distance relative to the host vehicle (when it is determined in step ST4c that an HOV lane exists ahead of the vehicle or on the route, the position of a lane changing permitted section in which the vehicle is permitted to make a lane change between the HOV lane and a normal lane is then acquired (step ST5c) [0083]) and a second characterization of the at least one additional lane of travel (fig. 5 showing ending point of entrance/exit of HOV lane) associated with the lane use restriction at a second distance relative to the vehicle (The length (distance) of the lane changing permitted section in which the vehicle is permitted to make a lane change between the HOV lane and the normal lane is then acquired (step ST6c) [0083]), the first and second characterizations being output by a trained model (the software-based method will be explained in detail. FIG. 
20 shows a case in which a lane changing permitted section in which the vehicle is permitted to make a lane change between an HOV lane and a normal lane exists ahead of the vehicle [0167]); wherein the server is configured to compare the first characterization with the second characterization (It is then checked to see whether or not the lane changing permitted section in which the vehicle is permitted to make a lane change between the HOV lane and the normal lane is long (step ST7c) [0083]) to identify a change in lane use restriction of the at least one additional lane of travel (an entrance and exit extracting unit for extracting an entrance and exit section which exists ahead of a vehicle and in which the vehicle is permitted to make a lane change between a special lane and a normal lane on a basis of the map data acquired by the map data acquiring unit and position information about the vehicle [0010]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama and Mizoguchi to include the teachings as taught by Nishibashi with a reasonable expectation of success. Nishibashi teaches the benefit of “A route search processing unit carries out a route search process in consideration of the enable or disable of use of an HOV lane by using the map data stored in the data buffer. When providing route guidance using an HOV lane, an HOV lane guidance unit provides guidance on a certain lane change with an image and a voice at the time that an entrance or exit point at which the user's vehicle should change its traveling direction falls within a predetermined distance from the position of the vehicle [Nishibashi, 0003]”. Regarding claim 82: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 79, upon which this claim is dependent. Takama further teaches: wherein the lane use restriction includes an HOV lane use restriction (see at least figs. 
3 and 4 showing the diamond that denotes an HOV lane), a bus lane use restriction (examiner is taking this limitation in the alternative but notes that bus lanes are known in the art.), or a bicycle lane use restriction (examiner is taking this limitation in the alternative but notes that bike icons are known in the art for denoting a bike lane.). Regarding claim 83: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 79, upon which this claim is dependent. Takama further teaches: wherein the attribute includes a descriptor indicative of the lane use restriction (see at least figs. 3 and 4 showing the diamond that denotes an HOV lane; the road information stored on the memory device of the location sensor 114 includes information describing the presence of an HOV lane as well as local regulations associated with the HOV lanes. Examples of local regulations will be described later. The location sensor 114 detects the current location of the subject vehicle 1 with the GPS receiver and outputs this data to the ECU 100 along with the road information stored on the memory device. In other words, the location sensor 114 outputs information about an HOV lane, similar to the road sensor 112. [0024]). Regarding claim 86: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 38, upon which this claim is dependent. Nishibashi further teaches: wherein the first distance relative to the host vehicle and the second distance relative to the vehicle are based on predetermined distances (whether or not the distance of the lane changing permitted section acquired in step ST6c is equal to or longer than 3 miles is checked to see [0094]). Regarding claim 87: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 38, upon which this claim is dependent. 
Nishibashi further teaches: wherein the memory further includes instructions that when executed by the circuitry cause the at least one processor to access a stored characterization of the at least one additional lane of travel at a location along the additional lane of travel corresponding to the first distance relative to the host vehicle (The guidance time adjusting unit 39 adjusts a guidance time on the basis of information showing that the current position has been determined to be a position where a guidance has to be provided from both the current position detected by the current position detecting unit 32 and the route shown by the route data stored in the route storage unit 34, pieces of HOV lane related information acquired from the HOV lane determining unit 35, the HOV lane entrance and exit extracting unit 36, and the passable or impassable determining unit 37, the information acquired from the forwardly-existing sign detecting unit 38, and so on [0057]), the stored characterization of the at least one additional lane of travel having been determined previously by the host vehicle (The position of the sign can be acquired by using, for example, a method of acquiring the sign position from information stored in a map data storage unit 16 [0153]). Regarding claim 88: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 87, upon which this claim is dependent. Nishibashi further teaches: the memory further includes instructions that when executed by the circuitry cause the at least one processor to confirm the first characterization of the at least one additional lane of travel based on the stored characterization (The control part 30 checks to see whether or not an HOV lane exists ahead of the vehicle or on the route on the basis of the result of the determination acquired by the HOV lane determining unit 35 [0082]).
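The stored-versus-current comparison mapped to claims 87-88 (access a characterization recorded at a first distance, then confirm or contradict it against a later observation) can be illustrated with a minimal Python sketch. All names here are hypothetical, introduced only to make the claimed comparison concrete; they do not come from any cited reference.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class LaneCharacterization:
    """Hypothetical record of a lane-use-restriction observation."""
    restriction: str   # e.g. "HOV", "bus", "bicycle", or "none"
    distance_m: float  # distance from the host vehicle when observed

def restriction_changed(stored: Optional[LaneCharacterization],
                        observed: LaneCharacterization) -> bool:
    """Compare a previously stored characterization with a newly observed one.

    A differing restriction label indicates a change in the lane use
    restriction; with no stored record there is nothing to compare against.
    """
    if stored is None:
        return False
    return stored.restriction != observed.restriction

# E.g. an HOV designation seen at 150 m earlier, but a general-purpose lane now:
prev = LaneCharacterization("HOV", 150.0)
curr = LaneCharacterization("none", 40.0)
```

Under the claimed arrangement, a True result is what would be reported to the server to update the road navigation model.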
Regarding claim 89: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 87, upon which this claim is dependent. Mizoguchi further teaches: wherein the stored characterization of the at least one additional lane of travel was determined by the host vehicle when the host vehicle was at the second distance relative to the location (When it is determined at step S11 that the lane restricted section 103a is set to the traveling lane 101a before the entrance of the branch lane 102 and the routine branches to step S14, whether the reaching distance L1 from the own vehicle M to the target spot has reached a lane change start distance L3 to the passing lane 101b is examined [0083]). Claim(s) 42, 57-61, 64-65, 68-72, 75-76, 80-81, and 84 is/are rejected under 35 U.S.C. 103 as being unpatentable over Takama et al. (US 2019/0225265), herein Takama, in view of Mizoguchi (US 2020/0180639), herein Mizoguchi, and Nishibashi et al. (US 2013/0103304), herein Nishibashi, in further view of Pham et al. (US 2020/0341466), herein Pham. Regarding claim 42: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 41, upon which this claim is dependent. Takama in view of Mizoguchi and Nishibashi does not explicitly teach, however Pham teaches: wherein the historical information includes information determined from analysis of at least one historical image captured prior to capturing the at least one image (During training, the DNN may be trained with images or other sensor data representations labeled or annotated with line segments representing lanes, crosswalks, entry-lines, exit-lines, bike lanes, etc., and may further include semantic information corresponding thereto. [0027]). 
It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama in view of Mizoguchi and Nishibashi to include the teachings as taught by Pham with a reasonable expectation of success. Pham teaches the benefit of “live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection. [Pham, abstract]”. Regarding claim 57: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 56, upon which this claim is dependent. Takama in view of Mizoguchi and Nishibashi does not explicitly teach, however Pham teaches: wherein the target vehicle is observed approaching the host vehicle (Referring to FIG. 6B, FIG. 6B illustrates example paths 600B generated by a path generator (e.g., path generator 516 of FIG. 5) based on the intersection structure 600A predicted by a neural network (e.g., machine learning model(s) 104 of FIG. 5). Paths 614A-614G are potential paths generated by connecting key points 602A-602V predicted by the neural network in FIG. 6A. The paths 614A-614G may correspond to paths for the ego-vehicle 800 and/or other vehicles or objects. 
The paths for other vehicles or objects may be informative to the vehicle 800 to help in determining potential future locations of other vehicles or objects as they traverse the intersection. [0090]; AEB systems detect an impending forward collision with another vehicle or other object, and may automatically apply the brakes if the driver does not take corrective action within a specified time or distance parameter. AEB systems may use front-facing camera(s) and/or RADAR sensor(s) 860 [0185]; examiner notes that fig. 3A depicts the system’s ability to detect and display oncoming traffic at the intersection.). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama in view of Mizoguchi and Nishibashi to include the teachings as taught by Pham with a reasonable expectation of success. Pham teaches the benefit of “live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection. [Pham, abstract]”. Regarding claim 58: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 56, upon which this claim is dependent. 
Takama in view of Mizoguchi and Nishibashi does not explicitly teach, however Pham teaches: wherein the target vehicle is observed approaching an intersection (Referring to FIG. 6B, FIG. 6B illustrates example paths 600B generated by a path generator (e.g., path generator 516 of FIG. 5) based on the intersection structure 600A predicted by a neural network (e.g., machine learning model(s) 104 of FIG. 5). Paths 614A-614G are potential paths generated by connecting key points 602A-602V predicted by the neural network in FIG. 6A. The paths 614A-614G may correspond to paths for the ego-vehicle 800 and/or other vehicles or objects. The paths for other vehicles or objects may be informative to the vehicle 800 to help in determining potential future locations of other vehicles or objects as they traverse the intersection. [0090]; AEB systems detect an impending forward collision with another vehicle or other object, and may automatically apply the brakes if the driver does not take corrective action within a specified time or distance parameter. AEB systems may use front-facing camera(s) and/or RADAR sensor(s) 860 [0185]; examiner notes that fig. 3A depicts the system’s ability to detect and display oncoming traffic at the intersection.). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama in view of Mizoguchi and Nishibashi to include the teachings as taught by Pham with a reasonable expectation of success. Pham teaches the benefit of “live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. 
For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection. [Pham, abstract]”. Regarding claim 59: Takama in view of Mizoguchi, Nishibashi and Pham teaches all the limitations of claim 58, upon which this claim is dependent. Pham further teaches: wherein the motion includes a turn direction of the target vehicle relative to the intersection (Referring to FIG. 6B, FIG. 6B illustrates example paths 600B generated by a path generator (e.g., path generator 516 of FIG. 5) based on the intersection structure 600A predicted by a neural network (e.g., machine learning model(s) 104 of FIG. 5). Paths 614A-614G are potential paths generated by connecting key points 602A-602V predicted by the neural network in FIG. 6A. The paths 614A-614G may correspond to paths for the ego-vehicle 800 and/or other vehicles or objects. The paths for other vehicles or objects may be informative to the vehicle 800 to help in determining potential future locations of other vehicles or objects as they traverse the intersection. [0090]; Potential path types include left turn, right turn, switch lanes, and/or continue in lane. [0033]). Regarding claim 60: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 56, upon which this claim is dependent. Takama in view of Mizoguchi and Nishibashi does not explicitly teach, however Pham teaches: wherein the target vehicle is observed traveling away from the host vehicle (Referring to FIG. 6B, FIG. 
6B illustrates example paths 600B generated by a path generator (e.g., path generator 516 of FIG. 5) based on the intersection structure 600A predicted by a neural network (e.g., machine learning model(s) 104 of FIG. 5). Paths 614A-614G are potential paths generated by connecting key points 602A-602V predicted by the neural network in FIG. 6A. The paths 614A-614G may correspond to paths for the ego-vehicle 800 and/or other vehicles or objects. The paths for other vehicles or objects may be informative to the vehicle 800 to help in determining potential future locations of other vehicles or objects as they traverse the intersection. [0090]; examiner notes that in figs 6A and 6B shows the other vehicle moving away from the host vehicle on the other side of the intersection.). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama in view of Mizoguchi and Nishibashi to include the teachings as taught by Pham with a reasonable expectation of success. Pham teaches the benefit of “live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection. [Pham, abstract]”. 
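The Pham abstract quoted throughout describes decoding heat maps of intersection key points. As a generic illustration only (this is not Pham's actual decoder; the function and array names are invented), a simple peak-picking pass over a 2D heat map recovers key points above a confidence threshold:

```python
import numpy as np

def decode_keypoints(heatmap: np.ndarray, threshold: float = 0.5):
    """Return (row, col, score) for every cell that attains the maximum
    of its 3x3 neighborhood and exceeds the confidence threshold.
    A generic peak picker, not the post-processing described in Pham."""
    h, w = heatmap.shape
    # Pad so border cells can be compared against 8 neighbors uniformly.
    padded = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    peaks = []
    for r in range(h):
        for c in range(w):
            val = heatmap[r, c]
            window = padded[r:r + 3, c:c + 3]
            if val >= threshold and val == window.max():
                peaks.append((r, c, float(val)))
    return peaks

# Tiny example: one confident peak, one sub-threshold bump.
hm = np.zeros((5, 5))
hm[1, 1] = 0.9   # confident key point, survives the threshold
hm[3, 3] = 0.3   # below threshold, ignored
assert decode_keypoints(hm) == [(1, 1, 0.9)]
```

Real decoders typically add non-maximum suppression over larger windows and sub-pixel refinement, but the thresholded-peak idea is the same.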
Regarding claim 61: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 56, upon which this claim is dependent. Takama in view of Mizoguchi and Nishibashi does not explicitly teach, however Pham teaches: wherein the motion includes a turn direction of the target vehicle (Referring to FIG. 6B, FIG. 6B illustrates example paths 600B generated by a path generator (e.g., path generator 516 of FIG. 5) based on the intersection structure 600A predicted by a neural network (e.g., machine learning model(s) 104 of FIG. 5). Paths 614A-614G are potential paths generated by connecting key points 602A-602V predicted by the neural network in FIG. 6A. The paths 614A-614G may correspond to paths for the ego-vehicle 800 and/or other vehicles or objects. The paths for other vehicles or objects may be informative to the vehicle 800 to help in determining potential future locations of other vehicles or objects as they traverse the intersection. [0090]; Potential path types include left turn, right turn, switch lanes, and/or continue in lane. [0033]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama in view of Mizoguchi and Nishibashi to include the teachings as taught by Pham with a reasonable expectation of success. Pham teaches the benefit of “live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. 
The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection. [Pham, abstract]”. Regarding claim 64: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 38, upon which this claim is dependent. Takama in view of Mizoguchi and Nishibashi does not explicitly teach, however Pham teaches: wherein at least one of the information indicative of the first characterization of the at least one additional lane of travel or the information indicative of the second characterization of the at least one additional lane of travel includes a lane orientation (FIG. 6A illustrates an example intersection structure prediction 600A (e.g., output(s) 106 of FIG. 5) generated using a neural network (e.g., machine learning model(s) 104 of FIG. 5), in accordance with some embodiments of the present disclosure. The prediction 600A includes a visualization of predicted line segments corresponding to lane classifications 604, 606, 608, 610, and 612, for each lane detected in the sensor data (e.g., sensor data 102 of FIG. 5). For example, the lane classification 604 may correspond to an entrance to a pedestrian crossing lane type, the lane classification 606 may correspond to an entrance to an intersection and/or an exit from a pedestrian crossing lane type, the lane classification 608 may correspond to an exit from a pedestrian crossing lane type, the lane classification 610 may correspond to an exit from an intersection and/or an entrance to a pedestrian crossing lane type, and the lane classifications 612 may correspond to a non-drivable lane type. Each line segment may also be associated with a center key point 506 and/or a corresponding heading direction(s) (e.g., key points and associated vectors 602A-602V). 
As such, the intersection structure and pose may be represented by a set of line segments with corresponding line classifications, key points, and/or heading directions. [0089]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama in view of Mizoguchi and Nishibashi to include the teachings as taught by Pham with a reasonable expectation of success. Pham teaches the benefit of “live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection. [Pham, abstract]”. Regarding claim 65: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 64, upon which this claim is dependent. Takama in view of Mizoguchi and Nishibashi does not explicitly teach, however Pham teaches: wherein the lane orientation includes an oncoming lane or a preceding lane (FIG. 6A illustrates an example intersection structure prediction 600A (e.g., output(s) 106 of FIG. 5) generated using a neural network (e.g., machine learning model(s) 104 of FIG. 5), in accordance with some embodiments of the present disclosure. 
The prediction 600A includes a visualization of predicted line segments corresponding to lane classifications 604, 606, 608, 610, and 612, for each lane detected in the sensor data (e.g., sensor data 102 of FIG. 5). For example, the lane classification 604 may correspond to an entrance to a pedestrian crossing lane type, the lane classification 606 may correspond to an entrance to an intersection and/or an exit from a pedestrian crossing lane type, the lane classification 608 may correspond to an exit from a pedestrian crossing lane type, the lane classification 610 may correspond to an exit from an intersection and/or an entrance to a pedestrian crossing lane type, and the lane classifications 612 may correspond to a non-drivable lane type. Each line segment may also be associated with a center key point 506 and/or a corresponding heading direction(s) (e.g., key points and associated vectors 602A-602V). As such, the intersection structure and pose may be represented by a set of line segments with corresponding line classifications, key points, and/or heading directions. [0089]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama in view of Mizoguchi and Nishibashi to include the teachings as taught by Pham with a reasonable expectation of success. Pham teaches the benefit of “live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. 
The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection. [Pham, abstract]”. Regarding claim 68: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 38, upon which this claim is dependent. Takama in view of Mizoguchi and Nishibashi does not explicitly teach, however Pham teaches: wherein the memory further includes instructions that when executed by the circuitry cause the at least one processor to determine a confidence level for at least one of the information indicative of the first characterization of the at least one additional lane of travel or the information indicative of the second characterization of the at least one additional lane of travel (the machine learning model(s) 104 may be trained to compute confidences corresponding to lane types 510 and/or other semantic information—e.g., using any suitable method for classification. [0080]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama in view of Mizoguchi and Nishibashi to include the teachings as taught by Pham with a reasonable expectation of success. Pham teaches the benefit of “live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. 
The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection. [Pham, abstract]”. Regarding claim 69: Takama in view of Mizoguchi, Nishibashi and Pham teaches all the limitations of claim 68, upon which this claim is dependent. Pham further teaches: wherein the confidence level is determined using a trained system (the machine learning model(s) 104 may be trained to compute confidences corresponding to lane types 510 and/or other semantic information—e.g., using any suitable method for classification. [0080]). Regarding claim 70: Takama in view of Mizoguchi, Nishibashi and Pham teaches all the limitations of claim 69, upon which this claim is dependent. Pham further teaches: wherein the trained system includes a neural network (the machine learning model(s) 104 may be trained to compute confidences corresponding to lane types 510 and/or other semantic information—e.g., using any suitable method for classification. [0080]; The method 700, at block B704, includes computing, using the neural network and based at least in part on the image data, first data representative of one or more two-dimensional (2D) heat maps representing locations of key points corresponding to the intersection and second data representative of classification confidence values corresponding to the key points. For example, the machine learning model(s) 104 may compute output(s) 106 with heat map(s) 108 including key points corresponding to the intersection and confidence values corresponding to line segment classification. [0093]). Regarding claim 71: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 38, upon which this claim is dependent. 
Takama in view of Mizoguchi and Nishibashi does not explicitly teach, however Pham teaches: wherein analyzing the at least one image to identify the attribute associated with the additional lane of travel includes analyzing the at least one image using a trained system (the machine learning model(s) 104 may be trained to compute confidences corresponding to lane types 510 and/or other semantic information—e.g., using any suitable method for classification. [0080]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama in view of Mizoguchi and Nishibashi to include the teachings as taught by Pham with a reasonable expectation of success. Pham teaches the benefit of “live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection. [Pham, abstract]”. Regarding claim 72: Takama in view of Mizoguchi, Nishibashi and Pham teaches all the limitations of claim 71, upon which this claim is dependent. Pham further teaches: wherein the trained system includes a neural network (the machine learning model(s) 104 may be trained to compute confidences corresponding to lane types 510 and/or other semantic information—e.g., using any suitable method for classification. 
[0080]; The method 700, at block B704, includes computing, using the neural network and based at least in part on the image data, first data representative of one or more two-dimensional (2D) heat maps representing locations of key points corresponding to the intersection and second data representative of classification confidence values corresponding to the key points. For example, the machine learning model(s) 104 may compute output(s) 106 with heat map(s) 108 including key points corresponding to the intersection and confidence values corresponding to line segment classification. [0093]). Regarding claim 75: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 74, upon which this claim is dependent. Takama in view of Mizoguchi and Nishibashi does not explicitly teach, however Pham teaches: wherein determining at least one of the information indicative of the first characterization of the at least one additional lane of travel or the information indicative of the second characterization of the at least one additional lane of travel further includes analyzing historical information (During training, the DNN may be trained with images or other sensor data representations labeled or annotated with line segments representing lanes, crosswalks, entry-lines, exit-lines, bike lanes, etc., and may further include semantic information corresponding thereto. [0027]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama in view of Mizoguchi and Nishibashi to include the teachings as taught by Pham with a reasonable expectation of success. Pham teaches the benefit of “live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. 
For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection. [Pham, abstract]”. Regarding claim 76: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 74, upon which this claim is dependent. Takama in view of Mizoguchi and Nishibashi does not explicitly teach, however Pham teaches: wherein the lane use restriction for the at least one additional lane of travel is identified based on a road sign represented in the at least one image, the road sign being associated with the additional lane of travel (a CNN executing on the DLA or dGPU (e.g., the GPU(s) 820) may include a text and word recognition, allowing the supercomputer to read and understand traffic signs, including signs for which the neural network has not been specifically trained. The DLA may further include a neural network that is able to identify, interpret, and provide semantic understanding of the sign, and to pass that semantic understanding to the path planning modules running on the CPU Complex.). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama in view of Mizoguchi and Nishibashi to include the teachings as taught by Pham with a reasonable expectation of success. 
Pham teaches the benefit of “live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection. [Pham, abstract]”. Regarding claim 80: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 79, upon which this claim is dependent. Takama in view of Mizoguchi and Nishibashi does not explicitly teach, however Pham teaches: wherein determining at least one of the information indicative of the first characterization of the at least one additional lane of travel or the information indicative of the second characterization of the at least one additional lane of travel further includes analyzing historical information (During training, the DNN may be trained with images or other sensor data representations labeled or annotated with line segments representing lanes, crosswalks, entry-lines, exit-lines, bike lanes, etc., and may further include semantic information corresponding thereto. [0027]). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama in view of Mizoguchi and Nishibashi to include the teachings as taught by Pham with a reasonable expectation of success. 
Pham teaches the benefit of “live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection. [Pham, abstract]”. Regarding claim 81: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 79, upon which this claim is dependent. Takama in view of Mizoguchi and Nishibashi does not explicitly teach, however Pham teaches: wherein the lane use restriction for the at least one additional lane of travel is identified based on a road sign represented in the at least one image, the road sign being associated with the additional lane of travel (a CNN executing on the DLA or dGPU (e.g., the GPU(s) 820) may include a text and word recognition, allowing the supercomputer to read and understand traffic signs, including signs for which the neural network has not been specifically trained. The DLA may further include a neural network that is able to identify, interpret, and provide semantic understanding of the sign, and to pass that semantic understanding to the path planning modules running on the CPU Complex.). 
It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama in view of Mizoguchi and Nishibashi to include the teachings as taught by Pham with a reasonable expectation of success. Pham teaches the benefit of “live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection. [Pham, abstract]”. Regarding claim 84: Takama in view of Mizoguchi and Nishibashi teaches all the limitations of claim 38, upon which this claim is dependent. Takama in view of Mizoguchi and Nishibashi does not explicitly teach, however Pham teaches: wherein the lane use restriction for the at least one additional lane of travel is identified based on a road sign represented in the at least one image, the road sign being associated with the additional lane of travel (a CNN executing on the DLA or dGPU (e.g., the GPU(s) 820) may include a text and word recognition, allowing the supercomputer to read and understand traffic signs, including signs for which the neural network has not been specifically trained. 
The DLA may further include a neural network that is able to identify, interpret, and provides semantic understanding of the sign, and to pass that semantic understanding to the path planning modules running on the CPU Complex.). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Takama in view of Mizoguchi and Nishibashi to include the teachings as taught by Pham with a reasonable expectation of success. Pham teaches the benefit of “live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection. [Pham, abstract]”. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hyun (US 2019/0095809) discloses vehicle movement prediction method and apparatus for identifying a type of a target vehicle traveling in a target lane on a road and generating movement prediction information to predict a movement of the target vehicle based on the type of the target vehicle, wherein the movement is associated with the target lane. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Scott R Jagolinzer whose telephone number is (571)272-4180. 
The examiner can normally be reached M-Th 8AM - 4PM Eastern. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace can be reached at (571)272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. Scott R. Jagolinzer Examiner Art Unit 3665 /S.R.J./Examiner, Art Unit 3665 /CHRISTIAN CHACE/Supervisory Patent Examiner, Art Unit 3665

Prosecution Timeline

Sep 22, 2021: Application Filed
Jan 29, 2024: Non-Final Rejection — §101, §103
Jun 24, 2024: Response Filed
Oct 30, 2024: Non-Final Rejection — §101, §103
May 05, 2025: Response Filed
Aug 23, 2025: Final Rejection — §101, §103
Dec 24, 2025: Request for Continued Examination
Jan 12, 2026: Response after Non-Final Action
Jan 24, 2026: Non-Final Rejection — §101, §103
Jan 28, 2026: Interview Requested
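The pendency figures above can be checked against the timeline directly; a minimal sketch of the date arithmetic (event dates hardcoded from the entries above, the `months_between` helper is illustrative, not part of any tool):

```python
from datetime import date

# Prosecution events, copied from the timeline above.
events = [
    (date(2021, 9, 22), "Application Filed"),
    (date(2024, 1, 29), "Non-Final Rejection"),
    (date(2025, 8, 23), "Final Rejection"),
    (date(2025, 12, 24), "Request for Continued Examination"),
    (date(2026, 1, 28), "Interview Requested"),
]

def months_between(a: date, b: date) -> int:
    """Whole calendar months elapsed from a to b."""
    return (b.year - a.year) * 12 + (b.month - a.month) - (b.day < a.day)

# Pendency from filing to the most recent event.
pendency = months_between(events[0][0], events[-1][0])
print(f"Pendency so far: {pendency // 12}y {pendency % 12}m")  # → 4y 4m
```

The 4y 4m to date already exceeds the examiner's 3y 6m median, consistent with the fourth OA round noted above.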

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12492103: REMOTE OPERATION TERMINAL AND MOBILE CRANE COMPRISING REMOTE OPERATION TERMINAL
Granted Dec 09, 2025 (2y 5m to grant)
Patent 12441318: VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND STORAGE MEDIUM
Granted Oct 14, 2025 (2y 5m to grant)
Patent 12344390: Method of Adjusting Directional Movement Ability in a Multi-Rotor Aircraft
Granted Jul 01, 2025 (2y 5m to grant)
Patent 12304504: VEHICLE CONTROL SYSTEM
Granted May 20, 2025 (2y 5m to grant)
Patent 12216018: SYSTEM AND METHOD FOR MOVING MATERIAL
Granted Feb 04, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 41%
With Interview: 60% (+19.2%)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 110 resolved cases by this examiner. Grant probability derived from career allow rate.
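The headline probabilities follow directly from the examiner's career counts shown above (45 granted of 110 resolved); a minimal sketch of the arithmetic, where treating the interview lift as a simple additive adjustment is an assumption about how the dashboard combines the two figures:

```python
# Career allow rate: granted / resolved cases (counts from the stats above).
granted, resolved = 45, 110
allow_rate = granted / resolved
print(f"Grant probability: {allow_rate:.0%}")  # → 41%

# With-interview estimate: baseline plus the observed +19.2% interview lift.
# Additive combination is an assumption, not a documented formula.
interview_lift = 0.192
with_interview = allow_rate + interview_lift
print(f"With interview: {with_interview:.0%}")  # → 60%
```

Note the rounding: 45/110 is 40.9%, displayed as 41%, and 40.9% + 19.2% is 60.1%, displayed as 60%.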
