Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-42 are pending.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/20/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Applicant is advised that should claim 2 be found allowable, claim 40 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).
Claim Rejections - 35 USC § 103
Claim(s) 1-3, 7-8, 14-17, 24-27, 31-33, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Yuan et al. (U.S. Patent Publication No. 2021/0350147, hereinafter “Yuan”) in view of Liu et al. (U.S. Patent Publication No. 2022/0111869, listed in the IDS received 02/20/2024, hereinafter “Liu”).
Regarding claim 1, Yuan discloses a system for predicting one or more drivable paths relative to at least one road segment (Yuan [0074]: “Routing module 307 is configured to provide one or more routes or paths from a starting point to a destination point”), the system comprising:
at least one processor (Yuan [0061]: “Perception and planning system 110 includes the necessary hardware (e.g., processor(s), memory, storage) and software (e.g., operating system, planning and routing programs) to receive information from sensor system 115, control system 111, wireless communication system 112, and/or user interface system 113, process the received information, plan a route or path from a starting point to a destination point, and then drive vehicle 101 based on the planning and control information”) programmed to:
access topographical information associated with the at least one road segment (Yuan [0074]: “Routing module 307 may generate a reference line in a form of a topographic map for each of the routes it determines from the starting location to reach the destination location. A reference line refers to an ideal route or path without any interference from others such as other vehicles, obstacles, or traffic condition. That is, if there is no other vehicle, pedestrians, or obstacles on the road, an ADV should exactly or closely follows the reference line”, the creation of the topographic map implies the use of topographical information);
generate a topographical representation of the at least one road segment based on the topographical information (Yuan [0074]: “Routing module 307 may generate a reference line in a form of a topographic map for each of the routes it determines from the starting location to reach the destination location. A reference line refers to an ideal route or path without any interference from others such as other vehicles, obstacles, or traffic condition. That is, if there is no other vehicle, pedestrians, or obstacles on the road, an ADV should exactly or closely follows the reference line”);
predict at least one drivable path relative to the at least one road segment based on the topographical representation of the at least one road segment (Yuan [0075]: “Based on a decision for each of the objects perceived, planning module 305 plans a path or route for the autonomous vehicle, as well as driving parameters (e.g., distance, speed, and/or turning angle), using a reference line provided by routing module 307 as a basis. That is, for a given object, decision module 304 decides what to do with the object, while planning module 305 determines how to do it”);
receive information identifying the at least one drivable path (Yuan [0076]: “Based on the planning and control data, control module 306 controls and drives the autonomous vehicle, by sending proper commands or signals to vehicle control system 111, according to a route or path defined by the planning and control data”);
store the information identifying the at least one drivable path in at least one map (Yuan [0076]: “Based on the planning and control data, control module 306 controls and drives the autonomous vehicle, by sending proper commands or signals to vehicle control system 111, according to a route or path defined by the planning and control data”; for the path to be utilized in this manner, it must be stored so it can be used by the processor).
Yuan does not explicitly disclose the system wherein the processor is programmed to:
input at least the topographical representation of the at least one road segment to at least one trained model, wherein the at least one trained model includes a graph neural network and is configured to predict at least one drivable path relative to the at least one road segment based on the topographical representation of the at least one road segment.
However, Liu teaches the system wherein the processor is programmed to:
input at least the topographical representation of the at least one road segment to at least one trained model (Liu [0040]: “The refinement model 214 may thus generate complete, occlusion-aware semantic top-down views that correspond to arbitrary new perspective images. A mapping is learned from the initial semantic map, which places the pixels of the perspective image 204 into a three-dimensional space, to the complete semantic top-view map”; Liu [0041]: “Using the outputs of the attention model 212 and the refinement model 214, a relational graph model 216 uses, for example, a graph neural network to model the relations between different objects, as well as between the objects and features of the room layout. The relational graph model 216 outputs the parametric output 218, which may rely on an assumption that the use of a Cartesian grid for interior layouts leads to regularities in image edge gradient statistics. By modeling the relationships with graphs, consistent/coherent layout predictions may be generated. Thus, a relational graph may be generated for use as an input to the relational graph model 216, using spatial relationships identified in the refined top-down representation and attention information from the attention map”), wherein the at least one trained model includes a graph neural network (Liu [0041]: “Using the outputs of the attention model 212 and the refinement model 214, a relational graph model 216 uses, for example, a graph neural network to model the relations between different objects, as well as between the objects and features of the room layout”) and is configured to predict at least one drivable path relative to the at least one road segment based on the topographical representation of the at least one road segment (Liu Claim 11: “determining a parametric top-down representation of the scene using the relational graph representation as input to a relational graph neural network model; determining a path through the scene using the parametric top-down representation”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the graph neural network as taught by Liu with the system of Yuan because it would improve the accuracy of the system, as neural networks can be trained for improvement using training data, allowing them to be more accurate than other methods of analysis (Liu [0075]). This motivation for the combination of Yuan and Liu is supported by KSR exemplary rationale (G) (some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention) and exemplary rationale (D) (applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
Regarding claim 32, it is rejected under the same analysis as claim 1 above.
Regarding claim 33, it is rejected under the same analysis as claim 1 above along with Yuan’s disclosure of a non-transitory computer-readable medium storing instructions executable by at least one processor (Yuan [0008]: “In another aspect of the disclosure, a non-transitory machine-readable medium having instructions stored therein is provided”).
Regarding claim 2, Yuan discloses the system, wherein the at least one processor is further programmed to distribute the at least one map to at least one vehicle (Yuan [0076]: “Based on the planning and control data, control module 306 controls and drives the autonomous vehicle”).
Regarding claim 3, Yuan discloses the system, wherein the at least one vehicle is configured to navigate autonomously or semi-autonomously based on the at least one map (Yuan [0076]: “Based on the planning and control data, control module 306 controls and drives the autonomous vehicle, by sending proper commands or signals to vehicle control system 111, according to a route or path defined by the planning and control data. The planning and control data include sufficient information to drive the vehicle from a first point to a second point of a route or path using appropriate vehicle settings or driving parameters (e.g., throttle, braking, steering commands) at different points in time along the path or route”).
Regarding claim 7, Yuan discloses the system, wherein the topographical information includes LIDAR output provided by one or more LIDAR devices included in one or more vehicles that traversed the at least one road segment (Yuan [0071]: “Perception module 302 may include a computer vision system or functionalities of a computer vision system to process and analyze images captured by one or more cameras in order to identify objects and/or features in the environment of autonomous vehicle. The objects can include traffic signals, road way boundaries, other vehicles, pedestrians, and/or obstacles, etc. The computer vision system may use an object recognition algorithm, video tracking, and other computer vision techniques. In some embodiments, the computer vision system can map an environment, track objects, and estimate the speed of objects, etc. Perception module 302 can also detect objects based on other sensors data provided by other sensors such as a radar and/or LIDAR”), and wherein the topographical representation of the at least one road segment is generated based on the LIDAR output (Yuan [0071]: “Perception module 302 may include a computer vision system or functionalities of a computer vision system to process and analyze images captured by one or more cameras in order to identify objects and/or features in the environment of autonomous vehicle. The objects can include traffic signals, road way boundaries, other vehicles, pedestrians, and/or obstacles, etc. The computer vision system may use an object recognition algorithm, video tracking, and other computer vision techniques. In some embodiments, the computer vision system can map an environment, track objects, and estimate the speed of objects, etc. Perception module 302 can also detect objects based on other sensors data provided by other sensors such as a radar and/or LIDAR”).
Regarding claim 8, Yuan discloses the system, wherein the topographical information includes a map retrieved from a database (Yuan [0114]: “Referring to FIG. 11B, when a loop closure is detected 1111, target map or segments map generation process 1112 can generate a target map based on previously extract segments for the frames of the loop. Segments map or target map can be a database, a struct or, a class object storing a list of segments of the frames”), and wherein the topographical representation of the at least one road segment is generated based on the map (Yuan [0114]: “Referring to FIG. 11B, when a loop closure is detected 1111, target map or segments map generation process 1112 can generate a target map based on previously extract segments for the frames of the loop. Segments map or target map can be a database, a struct or, a class object storing a list of segments of the frames”).
Regarding claim 14, Yuan does not explicitly disclose the system, wherein the at least one trained model is further configured to associate a plurality of nodes with the topographical representation and predict the at least one drivable path relative to the at least one road segment by predicting at least one connection between at least two of the plurality of nodes.
However, Liu teaches the system, wherein the at least one trained model is further configured to associate a plurality of nodes with the topographical representation (Liu [0042]: “This information can be encoded using nodes and edges in a relational graph, where the nodes represent objects and layout elements, and where the edges represent relationships between such nodes”) and predict the at least one drivable path by predicting at least one connection between at least two of the plurality of nodes (Liu Claim 11: “determining a parametric top-down representation of the scene using the relational graph representation as input to a relational graph neural network model; determining a path through the scene using the parametric top-down representation”).
It would have been obvious to combine the neural network of Liu with the system of Yuan for the same reasons used for claim 1 above.
Regarding claim 15, Yuan does not explicitly disclose the system, wherein the plurality of nodes are associated with a graph generated by the graph neural network.
However, Liu teaches the system, wherein the plurality of nodes are associated with a graph generated by the graph neural network (Liu [0042]: “The relational graph model 216 may operate in a manner similar to a convolutional neural network. Rather than being based on the proximity of pixels in a two-dimensional image, the relational graph model 216 regards objects within the interior scene as being related to one another by proximity in space or by semantic relationship”).
It would have been obvious to combine the neural network of Liu with the system of Yuan for the same reasons used for claim 1 above.
Regarding claim 16, Yuan does not explicitly disclose the system, wherein the graph includes edges representing relationships between the plurality of nodes.
However, Liu teaches the system, wherein the graph includes edges representing relationships between the plurality of nodes (Liu [0042]: “This information can be encoded using nodes and edges in a relational graph, where the nodes represent objects and layout elements, and where the edges represent relationships between such nodes”).
It would have been obvious to combine the neural network of Liu with the system of Yuan for the same reasons used for claim 1 above.
Regarding claim 17, Yuan does not explicitly disclose the system, wherein the at least one trained model is further configured to associate at least one attribute with at least one of the plurality of nodes.
However, Liu teaches the system, wherein the at least one trained model is further configured to associate at least one attribute with at least one of the plurality of nodes (Liu [0042]: “The relational graph model 216 may operate in a manner similar to a convolutional neural network. Rather than being based on the proximity of pixels in a two-dimensional image, the relational graph model 216 regards objects within the interior scene as being related to one another by proximity in space or by semantic relationship”).
It would have been obvious to combine the neural network of Liu with the system of Yuan for the same reasons used for claim 1 above.
Regarding claim 24, Yuan discloses the system, wherein the at least one road segment includes a divided road segment (Yuan [0070]: “The perception can include the lane configuration, traffic light signals, a relative position of another vehicle, a pedestrian, a building, crosswalk, or other traffic related signs (e.g., stop signs, yield signs), etc., for example, in a form of an object. The lane configuration includes information describing a lane or lanes, such as, for example, a shape of the lane (e.g., straight or curvature), a width of the lane, how many lanes in a road, one-way or two-way lane, merging or splitting lanes, exiting lane, etc.”).
Regarding claim 25, Yuan discloses the system, wherein the at least one road segment includes a plurality of travel lanes (Yuan [0070]: “The perception can include the lane configuration, traffic light signals, a relative position of another vehicle, a pedestrian, a building, crosswalk, or other traffic related signs (e.g., stop signs, yield signs), etc., for example, in a form of an object. The lane configuration includes information describing a lane or lanes, such as, for example, a shape of the lane (e.g., straight or curvature), a width of the lane, how many lanes in a road, one-way or two-way lane, merging or splitting lanes, exiting lane, etc.”).
Regarding claim 26, Yuan discloses the system, wherein the at least one road segment includes at least one of a roundabout, lane split, or lane merge (Yuan [0070]: “The perception can include the lane configuration, traffic light signals, a relative position of another vehicle, a pedestrian, a building, crosswalk, or other traffic related signs (e.g., stop signs, yield signs), etc., for example, in a form of an object. The lane configuration includes information describing a lane or lanes, such as, for example, a shape of the lane (e.g., straight or curvature), a width of the lane, how many lanes in a road, one-way or two-way lane, merging or splitting lanes, exiting lane, etc.”).
Regarding claim 27, Yuan does not explicitly disclose the system, wherein the at least one trained model is trained based on a plurality of images.
However, Liu teaches the system, wherein the at least one trained model is trained based on a plurality of images (Liu [0049]: “Block 310 may train the refinement model 216. The refinement model training information may include training images which are associated with corresponding top-down views of a same interior scene”).
It would have been obvious to combine the neural network of Liu with the system of Yuan for the same reasons used for claim 1 above.
Regarding claim 31, Yuan does not explicitly disclose the system, wherein the at least one trained model is trained based on map information.
However, Liu teaches the system, wherein the at least one trained model is trained based on map information (Liu [0050]: “Block 312 may train the relational graph model 218. The relational graph training information may include information about the top-down view of an interior scene, along with a corresponding attention map that provides positional relationship image from a perspective view of the same scene”).
It would have been obvious to combine the neural network of Liu with the system of Yuan for the same reasons used for claim 1 above.
Regarding claim 40, Yuan discloses the system, wherein the at least one processor is further programmed to distribute the at least one map to at least one vehicle (Yuan [0076]: “Based on the planning and control data, control module 306 controls and drives the autonomous vehicle”).
Allowable Subject Matter
Claims 34 and 41-42 are allowed.
Claims 4-6, 9-13, 18-23, 28-30, and 35-39 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Nikola et al. (U.S. Patent Publication No. 2021/0166340) discloses a method, apparatus, and computer program for generating road maps using real-time inputs from sensors (Nikola Abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AIDAN KEUP whose telephone number is (703)756-4578. The examiner can normally be reached Monday - Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AIDAN KEUP/
Examiner, Art Unit 2666

/Molly Wilburn/
Primary Examiner, Art Unit 2666