Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This is a Non-Final Action on the Merits. Claims 1-27 and 30-43 are currently pending and are addressed below.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on July 16, 2025, has been entered.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on December 12, 2025, has been considered and entered.
Response to Amendments
The amendments filed on July 16, 2025, have been considered and entered. Accordingly, claims 1, 34, and 35 have been amended.
Response to Arguments
The applicant states (Amend. 14) that Doron (WO 2018175441 A1) (“Doron”) (Attached) “is silent regarding any form of identifier of the at least one navigational map segment … does not receive, as an input, ‘a representation of the positions of the one or more detected object relative to the at least one image.’” The examiner respectfully disagrees. Doron teaches in paragraph 185 that, for each training image provided, a prestored path of the vehicle is also provided. In each image, future paths are outlined based on the objects identified in that image. Furthermore, Doron teaches using objects identified in images, and the positions of those objects within the image, with a trained system to determine an estimated position of the vehicle (See at least Doron Paragraphs 181-185 “For each one of the first plurality of training images, a prestored path of the vehicle ahead of a respective present location of the vehicle can be obtained (block 420). Reference is now additionally made to FIG. 9A-9C, which are graphical illustrations of features of the method of processing images to provide a trained system which is capable of estimating a future path ahead of a current location of a vehicle based on an image captured at the current location, in accordance with examples of the presently disclosed subject matter … Typically, a very large number of images are provided to the trained system during the training phase, and for each image a prestored path of the vehicle ahead of a respective present location of the vehicle is provided. The prestored path can be obtained by recording the future locations of the vehicle along the road on which the vehicle was traveling while the image was captured. In another example, the prestored path can be generated manually or using image processing by identifying, visually or algorithmically, various objects in the road or in a vicinity of the road, which indicate a location of the vehicle on the road. The location of the vehicle on the road can be the actual location of the vehicle on the road during the session when the image was captured” | Paragraphs 192-196 “Reference is made to FIG. 10, which is a graphical illustration of certain aspects of the method of estimating a future path ahead of a current location of a vehicle, according to examples of the presently disclosed subject matter. As is shown in FIG. 10, a vehicle 1010 is entering a section of road 1020. The road 1020 is an arbitrary road, and images from the road 1020 may or may not have been used in the training of the system (e.g., a neural network, deep learning system, etc.). The vehicle 1010 includes a camera (not shown) which captures images. The images captured by the camera on board the vehicle 1010 may or may not be cropped, or processed in any other way (e.g., down sampled) before being fed to the trained system. In FIG. 10, an image is illustrated by cone 630 which represents the FOV of the camera mounted in vehicle 1010. The image depicts arbitrary objects in the FOV of the camera. The image can, but does not necessarily, include road objects, such as road signs, lane marks, curbs, other vehicles, etc. The image can include other arbitrary objects, such as structures and trees at the sides of the road, etc. The trained system can be applied to the image 1030 of the environment ahead of the current arbitrary location of the vehicle 1010, and can provide an estimated future path of the vehicle 1010 ahead of the current arbitrary location. In FIG. 10, the estimated future path is denoted by pins 1041-1047. FIGS. 16A, 16B, 17A, and 17B further illustrate images including the estimated future paths 1610, 1620, 1710, and 1720 consistent with the disclosed embodiments … In some embodiments, the estimated future path of the vehicle ahead of the current location can be further based on identifying one or more predefined objects appearing in the image of the environment using at least one classifier”).
The applicant states (Amend. 14-15) that Ogale (US 20200174490 A1) (“Ogale”) “does not output a ‘second estimated position … represented as an error correction to be applied to the first estimated position.’” The examiner respectfully disagrees. Ogale teaches determining a position of a vehicle at a current time step based on navigational data provided to a trained system and with an error correction applied (See at least Ogale Paragraphs 32, 69).
Furthermore, the Applicant advances a broad characterization of Doron and Ogale and asserts that the references do not teach the limitations of amended independent claims 1, 34, and 35. Such an assertion amounts to no more than reciting the disputed limitations and generally alleging that the cited prior art references are deficient. Merely pointing out certain claim features recited in the independent claims and nakedly asserting that none of the cited prior art references teach or suggest such features does not amount to a separate patentability argument. Attorney arguments that are conclusory in nature, i.e., that provide no further substantive explanation or supporting evidence, are afforded little weight. See In re Geisler, 116 F.3d 1465, 1470 (Fed. Cir. 1997). See also Enzo Biochem, Inc. v. Gen-Probe, Inc., 424 F.3d 1276, 1284 (Fed. Cir. 2005) (“Attorney argument is no substitute for evidence.”). Furthermore, arguments of counsel cannot take the place of factually supported objective evidence. See, e.g., In re Huang, 100 F.3d 135, 139-40, 40 USPQ2d 1685, 1689 (Fed. Cir. 1996); In re De Blauwe, 736 F.2d 699, 705, 222 USPQ 191, 196 (Fed. Cir. 1984); accord MPEP § 2145. In addition, the arguments of counsel cannot take the place of evidence in the record. In re Schulze, 346 F.2d 600, 602, 145 USPQ 716, 718 (CCPA 1965); In re Geisler, 116 F.3d 1465, 43 USPQ2d 1362 (Fed. Cir. 1997) (“An assertion of what seems to follow from common experience is just attorney argument and not the kind of factual evidence that is required to rebut a prima facie case of obviousness.”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 21-26, and 31-37 are rejected under 35 U.S.C. 103 as being unpatentable over Browning (US 20180005050 A1) (“Browning”) in view of Doron (WO 2018175441 A1) (“Doron”) (Attached), in view of Ogale (US 20200174490 A1) (“Ogale”), and further in view of Shashua (US 20170010106 A1) (“Shashua”).
With respect to claim 1, Browning teaches a navigation system for a host vehicle (Browning Abstract “A storage system may be provided with a vehicle to store a collection of submaps that represent a geographic area where the vehicle may be driven”), the system comprising: at least one processor comprising circuitry and a memory (Processor and memory: See at least Browning Paragraphs 23 and 111), wherein the memory includes instructions that, when executed by the circuitry, cause the at least one processing device to:
receive at least one image representative of an environment of the host vehicle, the at least one image having been captured by an image capture device (Images of a vehicle’s environment are obtained: See at least Browning Paragraphs 19, 95, and 157);
analyze the at least one image to detect a presence of one or more objects represented in the at least one image (Analyzing images to detect the presence of objects: See at least Browning Paragraph 156);
determine position information relating to the one or more detected objects based on the analysis of the at least one image, compare the position information, relating to the one or more detected objects, to location information for one or more mapped objects represented in at least one navigational map segment of a plurality of navigational map segments (Determining position based on detected objects in map segments: See at least Browning Paragraph 153);
based on the comparison, determine a first estimated position of the host vehicle relative to the at least one navigational map segment by aligning the position information determined for the one or more detected objects with the location information for the one or more mapped objects included in the at least one navigational segment (Determine vehicle location on map segment: See at least Browning Figs. 12-14 and Paragraphs 104, 155, 163, 168-169);
and cause the host vehicle to implement the determined navigational action (Vehicle is autonomously operated based on determined actions: See at least Browning Paragraph 186).
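For purposes of illustration only, the alignment step recited in the claim mapping above (detected object positions aligned with mapped object locations to yield a first estimated position) can be sketched as follows. This is a minimal sketch, not Browning's implementation; it assumes detected and mapped landmarks are already associated pairwise and expressed in map-segment coordinates, and the function name is hypothetical.

```python
# Illustrative sketch only (not Browning's implementation). Assumes detected
# landmarks have already been associated one-to-one with mapped landmarks,
# both expressed in map-segment coordinates.
import numpy as np

def first_estimated_position(detected_xy, mapped_xy, prior_pose_xy):
    """Align detected object positions with mapped object locations by a
    least-squares translation, then apply that translation to a prior pose
    to obtain the first estimated position of the host vehicle."""
    detected_xy = np.asarray(detected_xy, dtype=float)  # shape (N, 2)
    mapped_xy = np.asarray(mapped_xy, dtype=float)      # shape (N, 2)
    offset = (mapped_xy - detected_xy).mean(axis=0)     # best-fit translation
    return np.asarray(prior_pose_xy, dtype=float) + offset
```

For corresponded point sets, the least-squares translation is simply the mean residual between the sets, which is why the sketch reduces to a mean.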
Browning, however, fails to explicitly disclose provide to a trained system at least a portion of the at least one image and at least an identifier of the at least one navigational map segment, and representations of the determined position information for the one or more detected objects relative to the at least one image, the identifier indicating which of the plurality of navigational map segments corresponds to the at least one navigational map segment; the trained system being trained to generate position information using a training data set that includes a plurality of training images, a training navigational map segment representing an environment associated with the training images, and error data representing a degree of misalignment between the training images and the training navigational map segment; receive, from the trained system, an output generated based on the at least a portion of the at least one image and the identifier, the output indicating a second estimated position of the host vehicle relative to the at least one navigational map segment, the second estimated position being represented as an error correction to be applied to the first estimated location of the host vehicle; and determine a navigational action for the host vehicle based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle.
Doron, however, teaches providing to a trained system at least a portion of the at least one image and at least an identifier of the at least one navigational map segment, and representations of the determined position information for the one or more detected objects relative to the at least one image, the identifier indicating which of the plurality of navigational map segments corresponds to the at least one navigational map segment, the trained system being trained to generate position information using a training data set that includes a plurality of training images, a training navigational map segment representing an environment associated with the training images, and error data representing a degree of misalignment between the training images and the training navigational map segment (See at least Doron FIG. 8 and Paragraphs 180-185 | Paragraphs 192-196 “Reference is made to FIG. 10, which is a graphical illustration of certain aspects of the method of estimating a future path ahead of a current location of a vehicle, according to examples of the presently disclosed subject matter. As is shown in FIG. 10, a vehicle 1010 is entering a section of road 1020. The road 1020 is an arbitrary road, and images from the road 1020 may or may not have been used in the training of the system (e.g., a neural network, deep learning system, etc.). The vehicle 1010 includes a camera (not shown) which captures images. The images captured by the camera on board the vehicle 1010 may or may not be cropped, or processed in any other way (e.g., down sampled) before being fed to the trained system. In FIG. 10, an image is illustrated by cone 630 which represents the FOV of the camera mounted in vehicle 1010. The image depicts arbitrary objects in the FOV of the camera. The image can, but does not necessarily, include road objects, such as road signs, lane marks, curbs, other vehicles, etc. The image can include other arbitrary objects, such as structures and trees at the sides of the road, etc. The trained system can be applied to the image 1030 of the environment ahead of the current arbitrary location of the vehicle 1010, and can provide an estimated future path of the vehicle 1010 ahead of the current arbitrary location. In FIG. 10, the estimated future path is denoted by pins 1041-1047. FIGS. 16A, 16B, 17A, and 17B further illustrate images including the estimated future paths 1610, 1620, 1710, and 1720 consistent with the disclosed embodiments … In some embodiments, the estimated future path of the vehicle ahead of the current location can be further based on identifying one or more predefined objects appearing in the image of the environment using at least one classifier”).
It would have been obvious to one of ordinary skill in the art to have modified the system of Browning to include providing to a trained system at least a portion of the at least one image and at least an identifier of the at least one navigational map segment, and representations of the determined position information for the one or more detected objects relative to the at least one image, the identifier indicating which of the plurality of navigational map segments corresponds to the at least one navigational map segment, the trained system being trained to generate position information using a training data set that includes a plurality of training images, a training navigational map segment representing an environment associated with the training images, and error data representing a degree of misalignment between the training images and the training navigational map segment, as taught by Doron as disclosed above, in order to ensure the map segment is accurate when providing information to the vehicle (Doron Paragraph 2 “The present disclosure relates generally to advanced driver assistance systems (ADAS), and autonomous vehicle (AV) systems”).
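By way of illustration, the inputs recited as being provided to the trained system (an image portion, a map-segment identifier, and representations of detected object positions relative to the image) could be packaged as sketched below. The class and field names are assumptions for illustration, not Doron's terminology; Doron states only that the image may be cropped or down-sampled before being fed to the trained system (par. 193).

```python
# Hypothetical packaging of the trained-system input recited in the claim;
# TrainedSystemInput, segment_id, etc. are illustrative names, not Doron's.
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class TrainedSystemInput:
    image_portion: np.ndarray                     # at least a portion of the image
    segment_id: int                               # identifies the map segment used
    object_positions: List[Tuple[float, float]]   # detections relative to the image

def build_trained_system_input(image: np.ndarray, segment_id: int,
                               detections_xy: List[Tuple[float, float]]) -> TrainedSystemInput:
    # The image may be cropped or down-sampled before being fed to the
    # trained system; keeping the lower (road-facing) half is one example.
    portion = image[image.shape[0] // 2:, :]
    return TrainedSystemInput(portion, segment_id, list(detections_xy))
```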
Browning in view of Doron fails to explicitly disclose receive, from the trained system, an output generated based on the at least a portion of the at least one image and the identifier, the output indicating a second estimated position of the host vehicle relative to the at least one navigational map segment, the second estimated position being represented as an error correction to be applied to the first estimated location of the host vehicle; and determine a navigational action for the host vehicle based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle.
Ogale, however, teaches receiving, from the trained system, an output generated based on the at least a portion of the at least one image and the identifier, the output indicating a second estimated position of the host vehicle relative to the at least one navigational map segment, the second estimated position being represented as an error correction to be applied to the first estimated location of the host vehicle (See at least Ogale FIGS. 4 and 9 and Paragraph 32 “Some implementations of the subject matter disclosed herein include a computer-implemented method for training a trajectory planning neural network system to determine waypoints for trajectories of vehicles. The method can include obtaining, by a neural network training system, multiple training data sets. Each training data set can include: (i) a first training input that characterizes a set of waypoints that represent respective locations of a vehicle at each of a series of first time steps, (ii) a second training input that characterizes at least one of (a) environmental data that represents a current state of an environment of the vehicle or (b) navigation data that represents a planned navigation route for the vehicle, and (iii) a target output characterizing a waypoint that represents a target location of the vehicle at a second time step that follows the series of first time steps. The neural network training system can train the trajectory planning neural network system on the multiple training data sets, including, for each training data set of the multiple training data sets: processing the first training input and the second training input according to current values of parameters of the trajectory planning neural network system to generate a set of output scores, each output score corresponding to a respective location of a set of possible locations in a vicinity of the vehicle; determining an output error using the target output and the set of output scores, and adjusting the current values of the parameters of the trajectory planning neural network system using the output error.” | Paragraph 47 “For a group of training data sets selected from the multiple training data sets, the training system can: for each training data set in the group of training data sets, processing the first training input and the second training input according to current values of parameters of the trajectory planning neural network system to generate a respective set of output scores for the training data set; determining the output error using the target outputs and the respective sets of output scores of all the training data sets in the group of training data sets; and adjusting the current values of the parameters of the trajectory planning neural network system using the output error” | Paragraph 69 “At each time step, the neural network system 102 processes a neural network input that includes waypoint data 108. The waypoint data 108 identifies a set of previous locations of the vehicle before the current time step. The previous locations identified by the waypoint data 108 can be previously traveled locations of the vehicle before the current time step (i.e., actual locations at which the vehicle was recently located), planned locations of the vehicle before the current time step (i.e., waypoints in the planned trajectory that have already been generated (predicted) at time steps before the current time step), or both.
For example, the neural network system 102 may take part in generating a planned trajectory for a vehicle that includes 20 waypoints, where each waypoint in the planned trajectory represents a planned location of the vehicle at a respective time step in a series of time steps (i.e., one time step for each waypoint). At the first time step, t1, all of the locations identified by the waypoint data 108 may be previously traveled locations at which the vehicle was actually driven at one or more time steps before t1. After the first time step (e.g., at time steps t2 through t20), the waypoint data 108 may identify each of the waypoints (i.e., planned locations) from t1 through the most recent time step that immediately precedes the current time step. For instance, at time step t9, the waypoint data 108 may identify each of the planned locations of the vehicle from t1 through t8.” | Paragraphs 96-100 “At stage 402, a trajectory management system, e.g., trajectory management system 114, obtains waypoint data, environmental data, and navigation data for a current time step in a series of time steps of a planned trajectory for a vehicle. The waypoint data, e.g., waypoint data 108, identifies a set of previous locations of the vehicle, which may include previously traveled locations of the vehicle, planned locations of the vehicle (i.e., waypoints of the planned trajectory from preceding time steps), or both traveled locations and planned locations of the vehicle … At stage 404, the trajectory management system generates a first neural network input from the waypoint data. The first neural network input characterizes the waypoint data in a format that is suitable for a neural network system, e.g., neural network system 102, to process … At stage 406, one or more encoder neural networks generate a second neural network input from the environmental data and the navigation data … At stage 410, a trajectory management system, e.g., waypoint selector 116 of trajectory management system 114, selects a waypoint for the planned trajectory at the current time step. The waypoint can be selected based on the set of scores generated by the trajectory planning neural network. In some implementations, the waypoint selector selects a location as the waypoint for the current time step as a result of the score for the selected location indicating that it is the most optimal waypoint location among the set of possible locations (e.g., the location with the highest score).”).
It would have been obvious to one of ordinary skill in the art to have modified the system of Browning in view of Doron to include receiving, from the trained system, an output generated based on the at least a portion of the at least one image and the identifier, the output indicating a second estimated position of the host vehicle relative to the at least one navigational map segment, the second estimated position being represented as an error correction to be applied to the first estimated location of the host vehicle, as taught by Ogale as disclosed above, in order to ensure accurate localization of a vehicle (Ogale Paragraph 1 “This specification describes a computer-implemented neural network system configured to plan a trajectory for a vehicle.”).
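For illustration, treating the trained system's output as a residual to be added to the first estimated position (the claimed error-correction form) might look as sketched below. The planar (dx, dy) form of the correction and the example values are assumptions; the trained system itself is not shown.

```python
# Minimal sketch, assuming the trained system's output takes the form of a
# planar residual (dx, dy); names and values are illustrative, not Ogale's.
import numpy as np

def second_estimated_position(first_xy, correction_xy):
    """Second estimated position expressed as an error correction applied
    to the first estimated position."""
    return np.asarray(first_xy, dtype=float) + np.asarray(correction_xy, dtype=float)

first = np.array([12.4, -3.1])    # first estimate, e.g., from map alignment
residual = np.array([-0.6, 0.2])  # assumed trained-system output
second = second_estimated_position(first, residual)
```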
Browning in view of Doron in view of Ogale fails to explicitly disclose determining a navigational action for the host vehicle based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle.
Shashua, however, teaches determining a navigational action based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle (Multiple images are used to determine positions of a moving vehicle, and those positions are then used to determine a navigational action for the vehicle: See at least Shashua Paragraph 6).
It would have been obvious to one of ordinary skill in the art to have modified the system of Browning in view of Doron in view of Ogale to include determining a navigational action based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle, as taught by Shashua as disclosed above, in order to provide optimal navigational instructions by ensuring an accurate position of the vehicle (Shashua Paragraph 3 “Additionally, this disclosure relates to systems and methods for constructing, using, and updating the sparse map for autonomous vehicle navigation”).
With respect to claim 2, Browning in view of Doron in view of Ogale in view of Shashua teach wherein the position information includes a location of the detected one or more objects in coordinates of the at least one navigational map segment (Identification of coordinates of the objects around the vehicle: See at least Browning Paragraph 43).
With respect to claim 3, Browning in view of Doron in view of Ogale in view of Shashua teach wherein the at least one navigational map segment includes a plurality of target trajectories associated with lanes of travel along a roadway represented by the at least one navigational map segment (Identification of lane-level movements of a vehicle: See at least Browning Paragraphs 43 and 80).
With respect to claim 4, Browning in view of Doron in view of Ogale in view of Shashua teach wherein each of the plurality of target trajectories is represented in the at least one navigational map segment as a three-dimensional spline (Trajectories are represented as 3D splines: See at least Shashua Paragraph 124).
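As an illustrative aside, a target trajectory represented as a three-dimensional spline (as recited in claim 4 and taught by Shashua at paragraph 124) can be sketched with a vector-valued cubic spline; the waypoint values below are arbitrary placeholders, not data from any reference.

```python
# Illustrative only: a target trajectory represented as a three-dimensional
# spline, here fit with SciPy's CubicSpline over a path parameter s in [0, 1].
import numpy as np
from scipy.interpolate import CubicSpline

s = np.linspace(0.0, 1.0, 6)            # spline parameter for six waypoints
waypoints = np.array([                  # (x, y, z) in map-segment coordinates
    [0, 0, 0.0], [5, 1, 0.1], [10, 3, 0.2],
    [15, 6, 0.2], [20, 10, 0.3], [25, 15, 0.3],
], dtype=float)
trajectory = CubicSpline(s, waypoints)  # vector-valued spline through R^3
midpoint = trajectory(0.5)              # interpolated (x, y, z) at s = 0.5
```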
With respect to claim 5, Browning in view of Doron in view of Ogale in view of Shashua teach wherein the plurality of target trajectories are determined based on drive information acquired from a plurality of vehicles during prior traversals of the roadway by the plurality of vehicles (Trajectories are determined from past vehicle data: See at least Browning Paragraph 22).
With respect to claim 21, Browning in view of Doron in view of Ogale in view of Shashua teach wherein the execution of the instructions included in the memory further cause the at least one processor to combine the first estimated position and the second estimated position to determine a refined estimated position of the host vehicle relative to the at least one navigational map segment (Shashua Paragraph 658 “In other embodiments, processing unit 110 may determine predicted location 3774 of vehicle 200 after time “t” based on following a trajectory predicted using holistic path prediction methods. In some exemplary embodiments, processing unit 110 may determine predicted location 3774 of vehicle 200 after time “t” by applying weights to some or all of the above-described cues. For example, processing unit 110 may determine the location of vehicle 200 after time “t” as a weighted combination of the locations predicted based on one or more of a left lane mark polynomial model, a right lane mark polynomial model, holistic path prediction, motion of a forward vehicle, determined free space ahead of the autonomous vehicle, and virtual lanes. Processing unit 110 may use current location 3712 of vehicle 200 and predicted location 3774 after time “t” to determine heading direction 3730 for vehicle 200.”).
With respect to claim 22, Browning in view of Doron in view of Ogale in view of Shashua teach wherein the refined estimated position is determined by applying a first weight value to the first estimated position and applying a second weight value to the second estimated position (Weights are applied to estimated locations: See at least Shashua Paragraph 658).
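For illustration, the weighted combination recited in claim 22 (a first weight value applied to the first estimated position and a second weight value applied to the second estimated position) reduces to the sketch below; the weight values 0.7/0.3 are arbitrary placeholders, not values from Shashua.

```python
# Sketch of the weighted combination discussed for claims 21-22 (cf. the
# Shashua passage quoted above); the default weights are placeholders.
import numpy as np

def refined_estimated_position(first_xy, second_xy, w1=0.7, w2=0.3):
    """Refined estimated position as a weighted combination of the first
    and second estimated positions; w1 and w2 are chosen to sum to one."""
    return w1 * np.asarray(first_xy, dtype=float) + w2 * np.asarray(second_xy, dtype=float)
```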
With respect to claim 23, Browning in view of Doron in view of Ogale in view of Shashua teach wherein the refined estimated position of the host vehicle includes a position of the host vehicle along a target trajectory included in the at least one navigational map segment (Position of the vehicle is estimated based on the position along its trajectory: See at least Shashua Paragraph 658).
With respect to claim 24, Browning in view of Doron in view of Ogale in view of Shashua teach wherein the target trajectory is associated with an available lane of travel (Trajectory is based on a lane the vehicle can travel on: See at least Browning Paragraph 43).
With respect to claim 25, Browning in view of Doron in view of Ogale in view of Shashua teach wherein the target trajectory is represented as a three-dimensional spline in the at least one navigational map segment (Shashua Paragraph 124 “In some embodiments of the system, the predetermined model representative of at least one road segment may include a three-dimensional spline representing a predetermined path of travel along the at least one road segment. The update to the predetermined model may include an update to the three-dimensional spline representing a predetermined path of travel along the at least one road segment.”).
With respect to claim 26, Browning in view of Doron in view of Ogale in view of Shashua teach wherein analyzing the at least one image to detect the presence of the one or more objects represented in the at least one image includes identifying at least one object based on edge identification or shape (Detecting objects based on shape: See at least Browning Paragraphs 152-153).
With respect to claim 31, Browning in view of Doron in view of Ogale in view of Shashua teach wherein the detected one or more objects include one or more of traffic lights, signs, road edges, lane marks, or poles (Objects include traffic lights, signs, roadway features: See at least Browning Paragraph 169).
With respect to claim 32, Browning in view of Doron in view of Ogale in view of Shashua teach wherein the trained system includes a neural network (Neural network is used: See at least Shashua Paragraph 95).
With respect to claim 33, Browning in view of Doron in view of Ogale in view of Shashua teach wherein the navigational action includes at least one of accelerating the host vehicle, decelerating the host vehicle, or turning the host vehicle (Action for a vehicle includes acceleration, deceleration, and turning: See at least Shashua Paragraph 606).
With respect to claim 34, Browning teaches a method for vehicle navigation (Browning Abstract “A storage system may be provided with a vehicle to store a collection of submaps that represent a geographic area where the vehicle may be driven”), the method comprising:
receiving at least one image representative of an environment of the host vehicle, the at least one image having been captured by an image capture device (Images of a vehicle’s environment are obtained: See at least Browning Paragraphs 19, 95, and 157);
analyzing the at least one image to detect a presence of one or more objects represented in the at least one image (Analyzing images to detect the presence of objects: See at least Browning Paragraph 156);
determine position information relating to the one or more detected objects based on the analysis of the at least one image, compare the position information, relating to the one or more detected objects, to location information for one or more mapped objects represented in at least one navigational map segment of a plurality of navigational map segments (Determining position based on detected objects in map segments: See at least Browning Paragraph 153);
based on the comparison, determine a first estimated position of the host vehicle relative to the at least one navigational map segment by aligning the position information determined for the one or more detected objects with the location information for the one or more mapped objects included in the at least one navigational segment (Determine vehicle location on map segment: See at least Browning Figs. 12-14 and Paragraphs 104, 155, 163, 168-169);
and cause the host vehicle to implement the determined navigational action (Vehicle is autonomously operated based on determined actions: See at least Browning Paragraph 186).
Browning, however, fails to explicitly disclose provide to a trained system at least a portion of the at least one image and at least an identifier of the at least one navigational map segment, and representations of the determined position information for the one or more detected objects relative to the at least one image, the identifier indicating which of the plurality of navigational map segments corresponds to the at least one navigational map segment; the trained system being trained to generate position information using a training data set that includes a plurality of training images, a training navigational map segment representing an environment associated with the training images, and error data representing a degree of misalignment between the training images and the training navigational map segment; receive, from the trained system, an output generated based on the at least a portion of the at least one image and the identifier, the output indicating a second estimated position of the host vehicle relative to the at least one navigational map segment, the second estimated position being represented as an error correction to be applied to the first estimated location of the host vehicle; and determine a navigational action for the host vehicle based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle.
Doron, however, teaches providing to a trained system at least a portion of the at least one image and at least an identifier of the at least one navigational map segment, and representations of the determined position information for the one or more detected objects relative to the at least one image, the identifier indicating which of the plurality of navigational map segments corresponds to the at least one navigational map segment, the trained system being trained to generate position information using a training data set that includes a plurality of training images, a training navigational map segment representing an environment associated with the training images, and error data representing a degree of misalignment between the training images and the training navigational map segment (See at least Doron FIG. 8 and Paragraphs 180-185 | Paragraphs 192-196 “Reference is made to FIG. 10, which is a graphical illustration of certain aspects of the method of estimating a future path ahead of a current location of a vehicle, according to examples of the presently disclosed subject matter. As is shown in FIG. 10, a vehicle 1010 is entering a section of road 1020. The road 1020 is an arbitrary road, and images from the road 1020 may or may not have been used in the training of the system (e.g., a neural network, deep learning system, etc.). The vehicle 1010 includes a camera (not shown) which captures images. The images captured by the camera on board the vehicle 1010 may or may not be cropped, or processed in any other way (e.g., down sampled) before being fed to the trained system. In FIG. 10, an image is illustrated by cone 630 which represents the FOV of the camera mounted in vehicle 1010. The image depicts arbitrary objects in the FOV of the camera. The image can, but does not necessarily, include road objects, such as road signs, lane marks, curbs, other vehicles, etc. The image can include other arbitrary objects, such as structures and trees at the sides of the road, etc. The trained system can be applied to the image 1030 of the environment ahead of the current arbitrary location of the vehicle 1010, and can provide an estimated future path of the vehicle 1010 ahead of the current arbitrary location. In FIG. 10, the estimated future path is denoted by pins 1041-1047. FIGS. 16A, 16B, 17A, and 17B further illustrate images including the estimated future paths 1610, 1620, 1710, and 1720 consistent with the disclosed embodiments … In some embodiments, the estimated future path of the vehicle ahead of the current location can be further based on identifying one or more predefined objects appearing in the image of the environment using at least one classifier”).
It would have been obvious to one of ordinary skill in the art to have modified the method of Browning to include providing to a trained system at least a portion of the at least one image and at least an identifier of the at least one navigational map segment, and representations of the determined position information for the one or more detected objects relative to the at least one image, the identifier indicating which of the plurality of navigational map segments corresponds to the at least one navigational map segment, the trained system being trained to generate position information using a training data set that includes a plurality of training images, a training navigational map segment representing an environment associated with the training images, and error data representing a degree of misalignment between the training images and the training navigational map segment, as taught by Doron as disclosed above, in order to ensure the map segment is accurate when providing information to the vehicle (Doron Paragraph 2 “The present disclosure relates generally to advanced driver assistance systems (ADAS), and autonomous vehicle (AV) systems”).
Browning in view of Doron fails to explicitly disclose receive, from the trained system, an output generated based on the at least a portion of the at least one image and the identifier, the output indicating a second estimated position of the host vehicle relative to the at least one navigational map segment, the second estimated position being represented as an error correction to be applied to the first estimated location of the host vehicle; and determine a navigational action for the host vehicle based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle.
Ogale, however, teaches receiving, from the trained system, an output generated based on the at least a portion of the at least one image and the identifier, the output indicating a second estimated position of the host vehicle relative to the at least one navigational map segment, the second estimated position being represented as an error correction to be applied to the first estimated location of the host vehicle (See at least Ogale FIGS. 4 and 9 and Paragraph 32 “Some implementations of the subject matter disclosed herein include a computer-implemented method for training a trajectory planning neural network system to determine waypoints for trajectories of vehicles. The method can include obtaining, by a neural network training system, multiple training data sets. Each training data set can include: (i) a first training input that characterizes a set of waypoints that represent respective locations of a vehicle at each of a series of first time steps, (ii) a second training input that characterizes at least one of (a) environmental data that represents a current state of an environment of the vehicle or (b) navigation data that represents a planned navigation route for the vehicle, and (iii) a target output characterizing a waypoint that represents a target location of the vehicle at a second time step that follows the series of first time steps. The neural network training system can train the trajectory planning neural network system on the multiple training data sets, including, for each training data set of the multiple training data sets: processing the first training input and the second training input according to current values of parameters of the trajectory planning neural network system to generate a set of output scores, each output score corresponding to a respective location of a set of possible locations in a vicinity of the vehicle; determining an output error using the target output and the set of output scores, and adjusting the current values of the parameters of the trajectory planning neural network system using the output error.” | Paragraph 47 “For a group of training data sets selected from the multiple training data sets, the training system can: for each training data set in the group of training data sets, processing the first training input and the second training input according to current values of parameters of the trajectory planning neural network system to generate a respective set of output scores for the training data set; determining the output error using the target outputs and the respective sets of output scores of all the training data sets in the group of training data sets; and adjusting the current values of the parameters of the trajectory planning neural network system using the output error” | Paragraph 69 “At each time step, the neural network system 102 processes a neural network input that includes waypoint data 108. The waypoint data 108 identifies a set of previous locations of the vehicle before the current time step. The previous locations identified by the waypoint data 108 can be previously traveled locations of the vehicle before the current time step (i.e., actual locations at which the vehicle was recently located), planned locations of the vehicle before the current time step (i.e., waypoints in the planned trajectory that have already been generated (predicted) at time steps before the current time step), or both.
For example, the neural network system 102 may take part in generating a planned trajectory for a vehicle that includes 20 waypoints, where each waypoint in the planned trajectory represents a planned location of the vehicle at a respective time step in a series of time steps (i.e., one time step for each waypoint). At the first time step, t1, all of the locations identified by the waypoint data 108 may be previously traveled locations at which the vehicle was actually driven at one or more time steps before t1. After the first time step (e.g., at time steps t2 through t20), the waypoint data 108 may identify each of the waypoints (i.e., planned locations) from t1 through the most recent time step that immediately precedes the current time step. For instance, at time step t9, the waypoint data 108 may identify each of the planned locations of the vehicle from t1 through t8.” | Paragraphs 96-100 “At stage 402, a trajectory management system, e.g., trajectory management system 114, obtains waypoint data, environmental data, and navigation data for a current time step in a series of time steps of a planned trajectory for a vehicle. The waypoint data, e.g., waypoint data 108, identifies a set of previous locations of the vehicle, which may include previously traveled locations of the vehicle, planned locations of the vehicle (i.e., waypoints of the planned trajectory from preceding time steps), or both traveled locations and planned locations of the vehicle … At stage 404, the trajectory management system generates a first neural network input from the waypoint data. The first neural network input characterizes the waypoint data in a format that is suitable for a neural network system, e.g., neural network system 102, to process … At stage 406, one or more encoder neural networks generate a second neural network input from the environmental data and the navigation data … At stage 410, a trajectory management system, e.g., waypoint selector 116 of trajectory management system 114, selects a waypoint for the planned trajectory at the current time step. The waypoint can be selected based on the set of scores generated by the trajectory planning neural network. In some implementations, the waypoint selector selects a location as the waypoint for the current time step as a result of the score for the selected location indicating that it is the most optimal waypoint location among the set of possible locations (e.g., the location with the highest score).”).
It would have been obvious to one of ordinary skill in the art to have modified the method of Browning in view of Doron to include receiving, from the trained system, an output generated based on the at least a portion of the at least one image and the identifier, the output indicating a second estimated position of the host vehicle relative to the at least one navigational map segment, the second estimated position being represented as an error correction to be applied to the first estimated location of the host vehicle, as taught by Ogale as disclosed above, in order to ensure accurate localization of a vehicle (Ogale Paragraph 1 “This specification describes a computer-implemented neural network system configured to plan a trajectory for a vehicle.”).
Browning in view of Doron in view of Ogale fails to explicitly disclose determining a navigational action for the host vehicle based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle.
Shashua, however, teaches determining a navigational action based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle (Multiple images are used to determine positions of a moving vehicle, and those positions are then used to determine a navigational action for the vehicle: See at least Shashua Paragraph 6).
It would have been obvious to one of ordinary skill in the art to have modified the method of Browning in view of Doron in view of Ogale to include determining a navigational action based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle, as taught by Shashua as disclosed above, in order to provide optimal navigational instructions by ensuring an accurate position of the vehicle (Shashua Paragraph 3 “Additionally, this disclosure relates to systems and methods for constructing, using, and updating the sparse map for autonomous vehicle navigation”).
With respect to claim 35, Browning teaches a non-transitory computer-readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor to perform a method for navigating a vehicle (Browning Abstract “A storage system may be provided with a vehicle to store a collection of submaps that represent a geographic area where the vehicle may be driven”), the method comprising: receiving at least one image representative of an environment of the host vehicle, the at least one image having been captured by an image capture device (Images of a vehicle’s environment are obtained: See at least Browning Paragraphs 19, 95, and 157);
analyzing the at least one image to detect a presence of one or more objects represented in the at least one image (Analyzing images to detect the presence of objects: See at least Browning Paragraph 156);
determine position information relating to the one or more detected objects based on the analysis of the at least one image, compare the position information, relating to the one or more detected objects, to location information for one or more mapped objects represented in at least one navigational map segment of a plurality of navigational map segments (Determining position based on detected objects in map segments: See at least Browning Paragraph 153);
based on the comparison, determine a first estimated position of the host vehicle relative to the at least one navigational map segment by aligning the position information determined for the one or more detected objects with the location information for the one or more mapped objects included in the at least one navigational segment (Determine vehicle location on map segment: See at least Browning Figs. 12-14 and Paragraphs 104, 155, 163, 168-169);
and cause the host vehicle to implement the determined navigational action (Vehicle is autonomously operated based on determined actions: See at least Browning Paragraph 186).
Browning, however, fails to explicitly disclose provide to a trained system at least a portion of the at least one image and at least an identifier of the at least one navigational map segment, and representations of the determined position information for the one or more detected objects relative to the at least one image, the identifier indicating which of the plurality of navigational map segments corresponds to the at least one navigational map segment; the trained system being trained to generate position information using a training data set that includes a plurality of training images, a training navigational map segment representing an environment associated with the training images, and error data representing a degree of misalignment between the training images and the training navigational map segment; receive, from the trained system, an output generated based on the at least a portion of the at least one image and the identifier, the output indicating a second estimated position of the host vehicle relative to the at least one navigational map segment, the second estimated position being represented as an error correction to be applied to the first estimated location of the host vehicle; and determine a navigational action for the host vehicle based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle.
Doron, however, teaches providing to a trained system at least a portion of the at least one image and at least an identifier of the at least one navigational map segment, and representations of the determined position information for the one or more detected objects relative to the at least one image, the identifier indicating which of the plurality of navigational map segments corresponds to the at least one navigational map segment, the trained system being trained to generate position information using a training data set that includes a plurality of training images, a training navigational map segment representing an environment associated with the training images, and error data representing a degree of misalignment between the training images and the training navigational map segment (See at least Doron FIG. 8 and Paragraphs 180-185 | Paragraphs 192-196 “Reference is made to FIG. 10, which is a graphical illustration of certain aspects of the method of estimating a future path ahead of a current location of a vehicle, according to examples of the presently disclosed subject matter. As is shown in FIG. 10, a vehicle 1010 is entering a section of road 1020. The road 1020 is an arbitrary road, and images from the road 1020 may or may not have been used in the training of the system (e.g., a neural network, deep learning system, etc.). The vehicle 1010 includes a camera (not shown) which captures images. The images captured by the camera on board the vehicle 1010 may or may not be cropped, or processed in any other way (e.g., down sampled) before being fed to the trained system. In FIG. 10, an image is illustrated by cone 630 which represents the FOV of the camera mounted in vehicle 1010. The image depicts arbitrary objects in the FOV of the camera. The image can, but does not necessarily, include road objects, such as road signs, lane marks, curbs, other vehicles, etc. The image can include other arbitrary objects, such as structures and trees at the sides of the road, etc. The trained system can be applied to the image 1030 of the environment ahead of the current arbitrary location of the vehicle 1010, and can provide an estimated future path of the vehicle 1010 ahead of the current arbitrary location. In FIG. 10, the estimated future path is denoted by pins 1041-1047. FIGS. 16A, 16B, 17A, and 17B further illustrate images including the estimated future paths 1610, 1620, 1710, and 1720 consistent with the disclosed embodiments … In some embodiments, the estimated future path of the vehicle ahead of the current location can be further based on identifying one or more predefined objects appearing in the image of the environment using at least one classifier”).
It would have been obvious to one of ordinary skill in the art to have modified the method of Browning to include providing to a trained system at least a portion of the at least one image and at least an identifier of the at least one navigational map segment, and representations of the determined position information for the one or more detected objects relative to the at least one image, the identifier indicating which of the plurality of navigational map segments corresponds to the at least one navigational map segment, the trained system being trained to generate position information using a training data set that includes a plurality of training images, a training navigational map segment representing an environment associated with the training images, and error data representing a degree of misalignment between the training images and the training navigational map segment, as taught by Doron as disclosed above, in order to ensure the map segment is accurate when providing information to the vehicle (Doron Paragraph 2 “The present disclosure relates generally to advanced driver assistance systems (ADAS), and autonomous vehicle (AV) systems”).
Browning in view of Doron fails to explicitly disclose receive, from the trained system, an output generated based on the at least a portion of the at least one image and the identifier, the output indicating a second estimated position of the host vehicle relative to the at least one navigational map segment, the second estimated position being represented as an error correction to be applied to the first estimated location of the host vehicle; and determine a navigational action for the host vehicle based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle.
Ogale, however, teaches receiving, from the trained system, an output generated based on the at least a portion of the at least one image and the identifier, the output indicating a second estimated position of the host vehicle relative to the at least one navigational map segment, the second estimated position being represented as an error correction to be applied to the first estimated location of the host vehicle (See at least Ogale FIGS. 4 and 9 and Paragraph 32 “Some implementations of the subject matter disclosed herein include a computer-implemented method for training a trajectory planning neural network system to determine waypoints for trajectories of vehicles. The method can include obtaining, by a neural network training system, multiple training data sets. Each training data set can include: (i) a first training input that characterizes a set of waypoints that represent respective locations of a vehicle at each of a series of first time steps, (ii) a second training input that characterizes at least one of (a) environmental data that represents a current state of an environment of the vehicle or (b) navigation data that represents a planned navigation route for the vehicle, and (iii) a target output characterizing a waypoint that represents a target location of the vehicle at a second time step that follows the series of first time steps. The neural network training system can train the trajectory planning neural network system on the multiple training data sets, including, for each training data set of the multiple training data sets: processing the first training input and the second training input according to current values of parameters of the trajectory planning neural network system to generate a set of output scores, each output score corresponding to a respective location of a set of possible locations in a vicinity of the vehicle; determining an output error using the target output and the set of output scores, and adjusting the current values of the parameters of the trajectory planning neural network system using the output error.” | Paragraph 47 “For a group of training data sets selected from the multiple training data sets, the training system can: for each training data set in the group of training data sets, processing the first training input and the second training input according to current values of parameters of the trajectory planning neural network system to generate a respective set of output scores for the training data set; determining the output error using the target outputs and the respective sets of output scores of all the training data sets in the group of training data sets; and adjusting the current values of the parameters of the trajectory planning neural network system using the output error” | Paragraph 69 “At each time step, the neural network system 102 processes a neural network input that includes waypoint data 108. The waypoint data 108 identifies a set of previous locations of the vehicle before the current time step. The previous locations identified by the waypoint data 108 can be previously traveled locations of the vehicle before the current time step (i.e., actual locations at which the vehicle was recently located), planned locations of the vehicle before the current time step (i.e., waypoints in the planned trajectory that have already been generated (predicted) at time steps before the current time step), or both.
For example, the neural network system 102 may take part in generating a planned trajectory for a vehicle that includes 20 waypoints, where each waypoint in the planned trajectory represents a planned location of the vehicle at a respective time step in a series of time steps (i.e., one time step for each waypoint). At the first time step, t1, all of the locations identified by the waypoint data 108 may be previously traveled locations at which the vehicle was actually driven at one or more time steps before t1. After the first time step (e.g., at time steps t2 through t20), the waypoint data 108 may identify each of the waypoints (i.e., planned locations) from t1 through the most recent time step that immediately precedes the current time step. For instance, at time step t9, the waypoint data 108 may identify each of the planned locations of the vehicle from t1 through t8.” | Paragraphs 96-100 “At stage 402, a trajectory management system, e.g., trajectory management system 114, obtains waypoint data, environmental data, and navigation data for a current time step in a series of time steps of a planned trajectory for a vehicle. The waypoint data, e.g., waypoint data 108, identifies a set of previous locations of the vehicle, which may include previously traveled locations of the vehicle, planned locations of the vehicle (i.e., waypoints of the planned trajectory from preceding time steps), or both traveled locations and planned locations of the vehicle … At stage 404, the trajectory management system generates a first neural network input from the waypoint data. The first neural network input characterizes the waypoint data in a format that is suitable for a neural network system, e.g., neural network system 102, to process … At stage 406, one or more encoder neural networks generate a second neural network input from the environmental data and the navigation data … At stage 410, a trajectory management system, e.g., waypoint selector 116 of trajectory management system 114, selects a waypoint for the planned trajectory at the current time step. The waypoint can be selected based on the set of scores generated by the trajectory planning neural network. In some implementations, the waypoint selector selects a location as the waypoint for the current time step as a result of the score for the selected location indicating that it is the most optimal waypoint location among the set of possible locations (e.g., the location with the highest score).”).
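For illustration only, the following minimal Python sketch mirrors the mechanism cited from Ogale above: a set of candidate next locations is scored from prior waypoints plus an environment encoding, and the highest-scoring candidate is selected as the next waypoint (cf. Ogale Paragraphs 32, 69, and 100). The toy linear scorer, the candidate grid, and all names and shapes are assumptions, not Ogale's disclosed implementation.

```python
# Illustrative sketch only -- not Ogale's actual implementation.
import numpy as np

rng = np.random.default_rng(0)

# Candidate offsets around the vehicle (assumed 3x3 grid of possible locations).
GRID = [(dx, dy) for dx in (-1.0, 0.0, 1.0) for dy in (0.5, 1.0, 1.5)]

# Toy stand-in for the trained trajectory-planning network: a random linear map
# from (8 waypoint features + 4 environment features) to one score per candidate.
W = rng.normal(size=(len(GRID), 8 + 4))

def score_candidates(waypoints, env_encoding):
    """Return one score per candidate location (cf. Ogale paras. 32, 69)."""
    wp_feat = np.asarray(waypoints[-4:], dtype=float).ravel()  # last 4 (x, y) waypoints
    x = np.concatenate([wp_feat, env_encoding])
    return W @ x

def next_waypoint(current_xy, waypoints, env_encoding):
    """Select the candidate with the highest score (cf. Ogale para. 100)."""
    scores = score_candidates(waypoints, env_encoding)
    dx, dy = GRID[int(np.argmax(scores))]
    return (current_xy[0] + dx, current_xy[1] + dy)

# Usage: four previously traveled/planned waypoints plus a 4-d environment code.
wps = [(0.0, 0.0), (0.0, 1.0), (0.1, 2.0), (0.1, 3.0)]
print(next_waypoint(wps[-1], wps, np.zeros(4)))
```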
It would have been obvious to one of ordinary skill in the art to have modified the method of Browning in view of Doron to include receiving, from the trained system, an output generated based on the at least a portion of the at least one image and the identifier, the output indicating a second estimated position of the host vehicle relative to the at least one navigational map segment, the second estimated position being represented as an error correction to be applied to the first estimated location of the host vehicle, as taught by Ogale as disclosed above, in order to accurately plan a trajectory for the vehicle (Ogale Paragraph 1 “This specification describes a computer-implemented neural network system configured to plan a trajectory for a vehicle.”).
Browning in view of Doron in view of Ogale fail to explicitly disclose determining a navigational action for the host vehicle based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle.
Shashua, however, teaches determining a navigational action based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle (Multiple images are used to determine positions of the moving vehicle, which are then used to determine a navigational action for the vehicle: See at least Shashua Paragraph 6).
It would have been obvious to one of ordinary skill in the art to have modified the method of Browning in view of Doron in view of Ogale to include determining a navigational action based on a combination of the first estimated position of the host vehicle and the second estimated position of the host vehicle, as taught by Shashua as disclosed above, in order to provide optimal navigational instructions by ensuring an accurate position of the vehicle (Shashua Paragraph 3 “Additionally, this disclosure relates to systems and methods for constructing, using, and updating the sparse map for autonomous vehicle navigation”).
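For illustration only, the following sketch shows one plausible reading of the claimed combination: the trained system's second estimate is treated as an error correction to the first (map-based) estimate, and the fused position drives a toy action selection. The weights, helper names, and steering rule are assumptions, not any cited reference's disclosed method.

```python
# Illustrative sketch only -- one plausible reading of the claimed combination.
import math

def combine_positions(first_est, correction, w1=0.5, w2=0.5):
    """Blend the first estimate with the correction-adjusted second estimate."""
    corrected = (first_est[0] + correction[0], first_est[1] + correction[1])
    return (w1 * first_est[0] + w2 * corrected[0],
            w1 * first_est[1] + w2 * corrected[1])

def navigational_action(position, target):
    """Toy action selection: heading (radians) from the fused position to a target."""
    return math.atan2(target[1] - position[1], target[0] - position[0])

# Usage: a map-based estimate plus a small trained-system correction.
fused = combine_positions((10.0, 5.0), (0.4, -0.2))
print(navigational_action(fused, (20.0, 8.0)))
```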
With respect to claim 36, Browning in view of Doron in view of Ogale in view of Shashua teach that the identifier includes at least one of a number of the at least one navigational map segment or a name of the at least one navigational map segment (See at least Doron Paragraph 183).
With respect to claim 37, Browning in view of Doron in view of Ogale in view of Shashua teach that each of the plurality of navigational map segments is associated with a unique index, and wherein the identifier includes at least one index of the at least one navigational map segment (See at least Doron Paragraph 183).
Claims 6-9, 11, 17-20, 38, 40-41, and 43 are rejected under 35 U.S.C. 103 as being unpatentable over Browning (US 20180005050 A1) (“Browning”) in view of Doron (WO 2018175441 A1) (“Doron”) (Attached) in view of Ogale (US 20200174490 A1) (“Ogale”) in view of Shashua (US 20170010106 A1) (“Shashua”) further in view of Hoffmann (US 20200364883 A1) (“Hoffmann”).
With respect to claim 6, and similarly claims 38 and 41, Browning in view of Doron in view of Ogale in view of Shashua fail to explicitly disclose wherein determining the navigational action for the host vehicle includes applying a first weight value to the first estimated position and applying a second weight value to the second estimated position.
Hoffmann, however, teaches wherein determining the navigational action for the host vehicle includes applying a first weight value to the first estimated position and applying a second weight value to the second estimated position (Applying weights to estimated positions of vehicle: See at least Hoffmann Paragraph 59).
It would have been obvious to one of ordinary skill in the art to have modified the system of Browning in view of Doron in view of Ogale in view of Shashua to include applying a first weight value to the first estimated position and applying a second weight value to the second estimated position when determining the navigational action for the host vehicle, as taught by Hoffmann as disclosed above, in order to ensure the optimal traversal of a vehicle (Hoffmann Paragraph 5 “It is therefore an object of the present invention to propose a reliable and accurate method for determining the position of a mobile unit, which can also be carried out with low computational effort.”).
With respect to claim 7, Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann teach wherein the first weight value and the second weight value are equal (Equal weights are used: See at least Hoffmann Paragraph 24 “A weight can initially be estimated or determined for the at least one hypothesis. If multiple hypotheses are determined, a weight is preferably initially estimated or determined for each hypothesis. Preferably, the weight of each hypothesis is initially the same.”).
With respect to claim 8, Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann teach wherein the first weight value is less than the second weight value (Weights are different: See at least Hoffmann Paragraph 60).
With respect to claim 9, Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann teach wherein the first weight value and the second weight value are determined based on an environment in which the host vehicle is located (Weights are based on the environment around the vehicle: See at least Hoffmann Paragraph 57).
With respect to claim 11, Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann teach wherein the host vehicle is located in an urban environment (Vehicle is located in an urban setting: See at least Browning Paragraph 94), and the second estimated position is weighted more than the first estimated position (Weights are different: See at least Hoffmann Paragraph 60).
With respect to claim 17, Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann teach wherein the first weight value and the second weight value are based on a level of light in the environment of the host vehicle represented in the at least one image (Weights are based on light in environment: See at least Shashua Paragraph 92).
With respect to claim 18, Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann teach wherein the level of light is above a predetermined threshold (Light is above a threshold: See at least Shashua Paragraph 124), and the first estimated position is weighted more than the second estimated position (Weights are different: See at least Hoffmann Paragraph 60).
With respect to claim 19, Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann teach wherein the level of light is below a predetermined threshold (Low light conditions: See at least Shashua Paragraph 124), and the second estimated position is weighted more than the first estimated position (Weights are different: See at least Hoffmann Paragraph 60).
With respect to claim 20, and similarly claims 40 and 43, Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann teach wherein the at least one image includes a representation of an at least partially obscured object (Image includes obscured objects: See at least Shashua Paragraph 91), and the second estimated position is weighted more than the first estimated position (Weights are different: See at least Hoffmann Paragraph 60).
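For illustration only, the condition-dependent weighting recited in claims 9, 11, and 17-20 can be summarized by a small selection routine; the density- and weather-based weighting addressed below for claims 12-16 would follow the same pattern. The threshold and weight values below are assumptions, not values disclosed by any cited reference.

```python
# Illustrative sketch only -- a compact reading of the weighting conditions
# in claims 9, 11, and 17-20. Threshold and weight values are assumptions.
LIGHT_THRESHOLD = 0.5  # assumed normalized luminance threshold

def select_weights(light_level, has_obscured_object, urban):
    """Return (w1, w2) for the first and second estimated positions."""
    # Weight the trained-system (second) estimate more in low light, when an
    # object is partially obscured, or in an urban environment (claims 11, 19, 20).
    if light_level < LIGHT_THRESHOLD or has_obscured_object or urban:
        return 0.3, 0.7
    # Otherwise weight the first estimate more (claim 18).
    return 0.7, 0.3

print(select_weights(light_level=0.8, has_obscured_object=False, urban=False))
print(select_weights(light_level=0.2, has_obscured_object=False, urban=False))
```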
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Browning (US 20180005050 A1) (“Browning”) in view of Doron (WO 2018175441 A1) (“Doron”) (Attached) in view of Ogale (US 20200174490 A1) (“Ogale”) in view of Shashua (US 20170010106 A1) (“Shashua”) in view of Hoffmann (US 20200364883 A1) (“Hoffmann”) further in view of Breed (US 20120209505 A1) (“Breed”).
With respect to claim 10, Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann teach that the first estimated position is weighted more than the second estimated position (Weights are different: See at least Hoffmann Paragraph 60).
Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann fail to explicitly disclose that the vehicle is located in a rural environment.
Breed, however, teaches that the vehicle is located in a rural environment (Vehicle is traveling in a rural environment: See at least Breed Paragraph 337 | Paragraph 82).
It would have been obvious to one of ordinary skill in the art to have modified the system of Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann to include that the vehicle drives in a rural environment, as taught by Breed as disclosed above, in order to effectively control the speed of a vehicle depending on the environment in which it is traveling (Breed Abstract “Method and arrangement for setting a speed limit for vehicle travelling on a road includes monitoring conditions of the road”).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Browning (US 20180005050 A1) (“Browning”) in view of Doron (WO 2018175441 A1) (“Doron”) (Attached) in view of Ogale (US 20200174490 A1) (“Ogale”) in view of Shashua (US 20170010106 A1) (“Shashua”) in view of Hoffmann (US 20200364883 A1) (“Hoffmann”) further in view of LIU (US 20180293756 A1) (“LIU”).
With respect to claim 12, Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann fail to explicitly disclose wherein the first weight value and the second weight value are based on a density of objects represented in the at least one image.
LIU, however, teaches that the first weight value and the second weight value are based on a density of objects represented in the at least one image (Weight is based on amount of objects in an image: See at least LIU Paragraph 75 | Paragraph 98).
It would have been obvious to one of ordinary skill in the art to have modified the system of Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann to include that the first weight value and the second weight value are based on a density of objects represented in the at least one image, as taught by LIU as disclosed above, in order to ensure accurate determination of a location based on reference landmarks (LIU Paragraph 1 “The present disclosure relates to the field of computing, in particular to, enhanced localization of a computing device.”).
Claims 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Browning (US 20180005050 A1) (“Browning”) in view of Doron (WO 2018175441 A1) (“Doron”) (Attached) in view of Ogale (US 20200174490 A1) (“Ogale”) in view of Shashua (US 20170010106 A1) (“Shashua”) in view of Hoffmann (US 20200364883 A1) (“Hoffmann”) in view of LIU (US 20180293756 A1) (“LIU”) further in view of Halder (US 20170247036 A1) (“Halder”).
With respect to claim 13, Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann in view of LIU teach that the first estimated position is weighted more than the second estimated position (Weights are different: See at least Hoffmann Paragraph 60).
Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann in view of LIU fail to explicitly disclose that the density of the objects represented in the at least one image is below a predetermined density threshold.
Halder, however, teaches that the density of the objects represented in the at least one image is below a predetermined density threshold (Amount of objects is smaller than a size threshold: See at least Halder Paragraph 35).
It would have been obvious to one of ordinary skill in the art to have modified the system of Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann in view of LIU to include that the density of the objects represented in the at least one image is below a predetermined density threshold, as taught by Halder as disclosed above, in order to effectively detect a vehicle’s position through an optimal number of landmarks (Halder Paragraph 2 “This relates generally to sensing grid-based sensing of a vehicle's surroundings, and more particularly, to such sensing using a sensing grid having dynamically variable sensing cell size.”).
With respect to claim 14, Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann in view of LIU teach that the first estimated position is weighted more than the second estimated position (Weights are different: See at least Hoffmann Paragraph 60).
Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann in view of LIU fail to explicitly disclose that the density of the objects represented in the at least one image equals or exceeds a predetermined density threshold.
Halder, however, teaches that the density of the objects represented in the at least one image equals or exceeds a predetermined density threshold (Amount of objects in an image is greater than a size threshold: See at least Halder Paragraph 35).
It would have been obvious to one of ordinary skill in the art to have modified the system of Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann in view of LIU to include that the density of the objects represented in the at least one image equals or exceeds a predetermined density threshold, as taught by Halder as disclosed above, in order to effectively detect a vehicle’s position through an optimal number of landmarks (Halder Paragraph 2 “This relates generally to sensing grid-based sensing of a vehicle's surroundings, and more particularly, to such sensing using a sensing grid having dynamically variable sensing cell size.”).
Claims 15-16, 39, and 42 are rejected under 35 U.S.C. 103 as being unpatentable over Browning (US 20180005050 A1) (“Browning”) in view of Doron (WO 2018175441 A1) (“Doron”) (Attached) in view of Ogale (US 20200174490 A1) (“Ogale”) in view of Shashua (US 20170010106 A1) (“Shashua”) in view of Hoffmann (US 20200364883 A1) (“Hoffmann”) further in view of Schalef-Schwartz (CN 108431549 B) (“Schalef-Schwartz”) (Translation attached).
With respect to claim 15, and similarly claims 39 and 42, Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann teach that the second estimated position is weighted more than the first estimated position (Weights are different: See at least Hoffmann Paragraph 60).
Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann fail to explicitly disclose that the first weight value and the second weight value are based on an adverse weather condition represented in the at least one image.
Schalef-Schwartz, however, teaches that the first weight value and the second weight value are based on an adverse weather condition represented in the at least one image (Weight is based on bad weather conditions in an image: See at least Schalef-Schwartz Paragraph 156 | Paragraph 296 | Paragraph 356).
It would have been obvious to one of ordinary skill in the art to have combined the system of Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann to include that the first weight value and the second weight value are based on an adverse weather condition represented in the at least one image, as taught by Schalef-Schwartz as disclosed above, in order to ensure safety of the vehicle when traversing through unsafe environmental conditions (Schalef-Schwartz Paragraph 6 “Autonomous vehicles may need to consider a wide variety of factors and make appropriate decisions based on those factors to safely and accurately arrive at a desired destination”).
With respect to claim 16, Browning in view of Doron in view of Ogale in view of Shashua in view of Hoffmann in view of Schalef-Schwartz teach that the adverse weather condition includes rain, snow, or fog (Adverse weather is snow or rain: See at least Schalef-Schwartz Paragraph 296).
Claims 27 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Browning (US 20180005050 A1) (“Browning”) in view of Doron (WO 2018175441 A1) (“Doron”) (Attached) in view of Ogale (US 20200174490 A1) (“Ogale”) in view of Shashua (US 20170010106 A1) (“Shashua”) further in view of Schalef-Schwartz (CN 108431549 B) (“Schalef-Schwartz”) (Translation attached).
With respect to claim 27, Browning in view of Doron in view of Ogale in view of Shashua fail to explicitly disclose wherein analyzing the at least one image to detect the presence of one or more objects represented in the at least one image includes receiving, from the image capture device, a second image representative of the environment of the host vehicle; and detecting the one or more objects based on detected motion of the one or more objects represented by at least one image location change of the one or more objects between the at least one image and the second image.
Schalef-Schwartz, however, teaches that analyzing the at least one image to detect the presence of one or more objects represented in the at least one image includes receiving, from the image capture device, a second image representative of the environment of the host vehicle; and detecting the one or more objects based on detected motion of the one or more objects represented by at least one image location change of the one or more objects between the at least one image and the second image (Detecting movement of objects between images in order to determine vehicle location: See at least Schalef-Schwartz Paragraph 153).
It would have been obvious to one of ordinary skill in the art to have combined the system of Browning in view of Doron in view of Ogale in view of Shashua so that analyzing the at least one image to detect the presence of one or more objects represented in the at least one image includes receiving, from the image capture device, a second image representative of the environment of the host vehicle; and detecting the one or more objects based on detected motion of the one or more objects represented by at least one image location change of the one or more objects between the at least one image and the second image, as taught by Schalef-Schwartz as disclosed above, in order to ensure safety of the vehicle when traversing through environments with objects (Schalef-Schwartz Paragraph 6 “Autonomous vehicles may need to consider a wide variety of factors and make appropriate decisions based on those factors to safely and accurately arrive at a desired destination”).
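For illustration only, the two-image motion detection cited from Schalef-Schwartz Paragraph 153 can be sketched as simple frame differencing: pixels whose intensity changes between the first and second image are grouped into a detected moving object. The array shapes, change threshold, and bounding-box output are assumptions, not the reference's disclosed implementation.

```python
# Illustrative sketch only -- frame-differencing detection across two images.
import numpy as np

def detect_moving_object(image1, image2, threshold=25):
    """Return bounding box (rmin, rmax, cmin, cmax) of changed pixels, or None."""
    diff = np.abs(image2.astype(int) - image1.astype(int))
    rows, cols = np.nonzero(diff > threshold)
    if rows.size == 0:
        return None  # no image-location change detected between the frames
    return rows.min(), rows.max(), cols.min(), cols.max()

# Usage: a synthetic object shifts two pixels to the right between frames.
frame1 = np.zeros((8, 8), dtype=np.uint8); frame1[3:5, 2:4] = 200
frame2 = np.zeros((8, 8), dtype=np.uint8); frame2[3:5, 4:6] = 200
print(detect_moving_object(frame1, frame2))
```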
With respect to claim 30, Browning in view of Doron in view of Ogale in view of Shashua fail to explicitly disclose that the trained system is configured at least based on a reward function.
Schalef-Schwartz, however, teaches that the trained system is configured at least based on a reward function (System is based on reward: See at least Schalef-Schwartz Paragraph 186).
It would have been obvious to one of ordinary skill in the art to have combined the system of Browning in view of Doron in view of Ogale in view of Shashua so that the trained system is configured at least based on a reward function, as taught by Schalef-Schwartz as disclosed above, in order to reinforce desired behavior in the trained system (Schalef-Schwartz Paragraph 6 “Autonomous vehicles may need to consider a wide variety of factors and make appropriate decisions based on those factors to safely and accurately arrive at a desired destination”).
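For illustration only, configuring a trained system based on a reward function can be sketched as a single tabular Q-learning update; Schalef-Schwartz's system is not necessarily tabular, and the states, actions, reward, and learning constants below are assumptions.

```python
# Illustrative sketch only -- one reward-driven (Q-learning) update step.
import collections

ALPHA, GAMMA = 0.1, 0.9            # assumed learning rate and discount factor
Q = collections.defaultdict(float)  # (state, action) -> estimated value

def update(state, action, reward, next_state,
           actions=("keep_lane", "change_lane")):
    """Nudge Q(state, action) toward reward + discounted best next value."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Usage: the reward function reinforces staying in lane on a clear road.
update(state="clear_road", action="keep_lane", reward=1.0, next_state="clear_road")
print(Q[("clear_road", "keep_lane")])
```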
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IBRAHIM ABDOALATIF ALSOMAIRY whose telephone number is (571)272-5653. The examiner can normally be reached M-F 7:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris Almatrahi can be reached at 313-446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IBRAHIM ABDOALATIF ALSOMAIRY/
Examiner, Art Unit 3667

/KENNETH J MALKOWSKI/
Primary Examiner, Art Unit 3667