DETAILED ACTION
This is a response to applicant’s submissions filed on 1/14/2026. Claims 1, 3-10, 12-15 and 17-23 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/14/2026 has been entered.
Response to Arguments
Applicant's arguments filed 1/14/2026 have been fully considered but they are not persuasive.
In response to Applicant’s argument that Kwant does not teach or suggest that one of the determined points is on the lane lines (Applicant’s Remarks; p. 15), it is noted that the features upon which applicant relies (i.e., determined points on lane lines) are not recited in the rejected claim(s). The claims are directed to points/vertices located on or associated with a generically recited predicted path, and do not recite how/when they are determined. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). It is further unclear what the predicted path represents in the claims. See rejections below.
In response to Applicant’s argument that Kwant does not teach or suggest that one of the determined points is actually occluded within the image because Kwant describes that all of the determined points are not occluded by objects within the images, but are instead associated with visible portions of the lane lines (Applicant’s Remarks; p. 15), the Examiner respectfully disagrees. Kwant, in paragraph 61, discloses that, when the system encounters a puddle obscuring the underlying lane line as it progressively extends the contour of the detected lane line forward, the polyline can be constructed by tracing the contour through the occlusion to find a next point along the corresponding lane line. Although the Applicant’s specification, in paragraph 120, discloses that drivable paths may be annotated through vehicles and other objects occluding the lane markings, and, in paragraph 122, that the annotation may extend along a vehicle, there does not appear to be explicit disclosure that the annotations comprise the occluded points on the predicted path; however, a line, by definition, is an infinite, continuous set of points. Kwant, in figure 8, clearly discloses points on the determined polyline that correspond with obscured points of the lane line. Kwant’s “tracing through” further appears analogous to the Applicant’s extending. See rejection below.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 22 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claim 22, lines 8-9, the limitation “determining … the geometry of the predicted path is further based at least on the classifications” appears to be new matter because classifying the predicted path appears to occur after its geometry is determined. Paragraphs 103-104 disclose the machine learning model computes the path geometries and then assigns a class to the predicted path. Paragraphs 109-110 disclose generating a geometry for the predicted path and then determining that the predicted path corresponds to a path class. Paragraph 114 discloses the annotated paths included in the ground truth data may include path class information in addition to the path geometry; however, paragraph 114 does not appear to disclose or imply that the path geometry is based on the class.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 3-10, 12-15 and 17-23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claims 1 and 9, lines 5-6 and 2, respectively, the limitation “[the/an] image depicting a predicted path” renders the claims indefinite because it is unclear how the image data depicts a predicted path when the image data is used to predict the path. Paragraph 31 discloses one or more deep neural networks use received image data to generate one or more predicted paths. Paragraph 43 further discloses that, when the sensor data is image data, the path geometries are computed by the machine learning models for a current or most recent image and weighted against the path geometries computed by the machine learning models for one or more previous images. Paragraphs 98-99 disclose the machine learning models compute the path edges and path rail and may overlay them on an image to form a visualization. Although paragraph 99 discloses the path edges and path rail may be used to perform path planning, mapping, lane-keeping, lane-changing, etc., there does not appear to be explicit disclosure of using the visualization or the overlaid image in any subsequent operations. Paragraph 91 discloses the machine learning models are trained to output path geometries that include a predetermined number of predicted vertices. Paragraph 35 discloses, in embodiments where the sensor data includes image data, the image may include data representative of images of a field of view of one or more cameras of a vehicle; however, paragraph 35 does not disclose if or how the image includes the predicted path. Paragraph 38 discloses the raw image data may be pre-processed using a pre-processing image pipeline that may include decompanding, noise reduction, demosaicing, white balancing, histogram computing, and/or adaptive global tone mapping; however, the pre-processing does not appear to include adding the predicted path(s) to the image or image data.
Paragraph 118 and figure 9 disclose that separate images and ground truth data are used during training, and the ground truth data may include annotations and path labels such as a center and edges of a drivable path. Paragraph 120 again discloses the machine learning models are trained to predict the path geometries of drivable paths. It is further unclear whether the dashed lines depicted in sensor data 102A and 102B of figures 1B and 1C, respectively, are physical lane markings, which is suggested by their occlusion due to the presence of another vehicle and the horizon, or outputs of edge-to-rail calculations that are depicted by dashed lines (see para. 51). For the purposes of examination, it will be assumed that the image generally depicts a predicted path because it depicts the area in the vehicle’s direction of travel.
Regarding claim 1, lines 10-11, the limitation “points located on the predicted path, at least a portion of the points being occluded” renders the claim indefinite because it is unclear whether the points are physical road features that are captured in the image and may be hidden from view, or points comprising the geometry of a computed predicted path; therefore, it is unclear how the points are occluded. Paragraphs 120 and 122 disclose the machine learning models may be trained to predict path geometries where lane markings are occluded by vehicles or other objects. Paragraph 124 further discloses the path edge annotations used as ground truth data, with respect to images, may extend along a vehicle or other object where lane markings may be occluded. For the purposes of examination, it will be assumed that the points are positions on lane markings that may be occluded.
Regarding claim 9, lines 11-12, the limitation “at least a portion of the one or more two-dimensional points are associated with the portion of the predicted path that is occluded” renders the claim indefinite because it is unclear whether the points are physical road features that are captured in the image and may be hidden from view, or points comprising the geometry of a computed predicted path; therefore, it is unclear how the points are occluded. Paragraphs 120 and 122 disclose the machine learning models may be trained to predict path geometries where lane markings are occluded by vehicles or other objects. Paragraph 124 further discloses the path edge annotations used as ground truth data, with respect to images, may extend along a vehicle or other object where lane markings may be occluded. For the purposes of examination, it will be assumed that the points are positions on lane markings that may be occluded.
Regarding claim 18, lines 8-9, the limitation “at least a portion of the one or more vertices associated with the predicted path are occluded” renders the claim indefinite because it is unclear whether the vertices are physical road features that are captured in the image and may be hidden from view, or points comprising the geometry of a computed predicted path; therefore, it is unclear how the vertices are occluded. Paragraphs 120 and 122 disclose the machine learning models may be trained to predict path geometries where lane markings are occluded by vehicles or other objects. Paragraph 124 further discloses the path edge annotations used as ground truth data, with respect to images, may extend along a vehicle or other object where lane markings may be occluded. For the purposes of examination, it will be assumed that the vertices are positions on lane markings that may be occluded.
Regarding claim 22, lines 8-9, the limitation “the predicted path is further based at least on the classifications” renders the claim indefinite because, as discussed above, it appears that the paths are classified after determining their geometry; therefore, it is unclear how their geometry can also be determined based on the classifications. Paragraph 64 discloses using the class labels to follow a trajectory or path, and paragraph 111 further discloses using the geometry for the path class to control the vehicle. For the purposes of examination, it will be assumed that it is the navigation of the machine that is based on the classifications.
Claims 3-8, 10, 12-15, 17 and 19-23 are rejected as being dependent on a rejected claim and for failing to cure the deficiencies listed above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-10, 12-13, 15 and 17-23 are rejected under 35 U.S.C. 103 as being unpatentable over Kwant (US 2019/0035101) in view of Shashua et al. (US 2017/0010618), hereinafter Shashua.
Regarding claim 1, as best understood, Kwant discloses a method comprising: obtaining image data representative of an image, the image depicting a predicted path that is at least partially occluded by one or more objects (Kwant; para. 61: As each image or frame of the video is captured, the computer vision system 103 processes the image using the RNN 105 to iteratively construct respective polyline representations 803a-803e of the lane lines depicted in each image or frame … the computer vision system 103 can trace through potential occlusions of the object contours or lane lines in the input image. In the example of FIG. 8, the polyline 803d can be constructed by tracing through the occlusion 805 (e.g., a puddle obscuring the underlying lane line) to find a next point along the corresponding lane line … Other common potential occlusions includes include cars or other vehicles in the input image that occlude a road feature or object of interest (e.g., a lane line).); generating, by one or more neural networks and based at least on the image data, output data representative of distance values indicating distances between one or more anchor points and points located on the predicted path (Kwant; para. 51: the inputs for the RNN 105 include, but are not limited to the following: (1) a precise location of the current instance of the RNN 105 in either absolute coordinates or coordinates relative to the current cell … Accordingly, with respect to the first input, in step 205, the computer vision system 103 determines this precise location with respect to the current cell in which the cursor of the RNN 105 is located. 
If the current grid cell is the start of a lane line, the computer vision system 103 (e.g., via the respective instance of the RNN 105) determines the precise location of a start location (e.g., any of start locations 701a-701c) within the selected grid cell (e.g., the current grid cell)), at least a portion of the points being occluded within the image by the one or more objects (Kwant; para. 61: polyline 803d can be constructed by tracing through the occlusion); determining, based at least on the one or more anchor points and the distance values, locations within the image of the points located on the predicted path; determining, based at least on the locations within the image of the points, at least a portion of a geometry of the predicted path (Kwant; para. 61: the computer vision system 103 constructs the corresponding polylines 803a-803e from the starting point of each lane line in the frame progressing upwards in the frame in real-time or near-real time); and causing, using one or more control operations of a machine and based at least on the at least the portion of the geometry, the machine to navigate (Kwant; para. 67: the application 121 may also be any type of application that is executable on the UE 117 and/or vehicle 101, such as autonomous driving applications).
Kwant does not explicitly disclose causing the machine to navigate from a first location along the predicted path to a second location along the predicted path.
Shashua, in the same field of endeavor (vehicle navigation), discloses causing a machine to navigate from a first location along a predicted path to a second location along the predicted path (Shashua; para. 349: FIG. 5E is a flowchart showing an exemplary process 500E for causing one or more navigational responses in vehicle 200 based on a vehicle path, consistent with the disclosed embodiments. At step 570, processing unit 110 may construct an initial vehicle path associated with vehicle 200. The vehicle path may be represented using a set of points expressed in coordinates (x, z), and the distance d.sub.i between two points in the set of points may fall in the range of 1 to 5 meters. In one embodiment, processing unit 110 may construct the initial vehicle path using two polynomials, such as left and right road polynomials. Processing unit 110 may calculate the geometric midpoint between the two polynomials and offset each point included in the resultant vehicle path by a predetermined offset (e.g., a smart lane offset), if any (an offset of zero may correspond to travel in the middle of a lane)).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the autonomous driving application of Kwant to drive the vehicle along a path between lane boundaries, as disclosed by Shashua, to yield the predictable result of accurately driving the vehicle on a road.
Regarding claim 4, as best understood, Kwant, as modified, discloses the points comprise vertices of one or more path edges associated with the predicted path; and the determining the at least the portion of the geometry of the predicted path comprises determining the one or more path edges based at least on the locations of the points (Kwant; para. 61: the computer vision system 103 constructs the corresponding polylines 803a-803e from the starting point of each lane line in the frame progressing upwards in the frame in real-time or near-real time) and determining the predicted path based at least on the one or more path edges (Shashua; para. 349: The vehicle path may be represented using a set of points expressed in coordinates (x, z), and the distance d.sub.i between two points in the set of points may fall in the range of 1 to 5 meters. In one embodiment, processing unit 110 may construct the initial vehicle path using two polynomials, such as left and right road polynomials. Processing unit 110 may calculate the geometric midpoint between the two polynomials and offset each point included in the resultant vehicle path by a predetermined offset (e.g., a smart lane offset), if any (an offset of zero may correspond to travel in the middle of a lane)).
Regarding claim 5, as best understood, Kwant, as modified, discloses the points comprise vertices of the predicted path (Shashua; para. 349: FIG. 5E is a flowchart showing an exemplary process 500E for causing one or more navigational responses in vehicle 200 based on a vehicle path, consistent with the disclosed embodiments. At step 570, processing unit 110 may construct an initial vehicle path associated with vehicle 200. The vehicle path may be represented using a set of points expressed in coordinates (x, z), and the distance d.sub.i between two points in the set of points may fall in the range of 1 to 5 meters.); and the determining the at least the portion of the geometry of the predicted path comprises determining, based at least on the locations of the vertices, the at least the portion of the geometry of the predicted path as including a line through the vertices (Shashua; para. 349: In one embodiment, processing unit 110 may construct the initial vehicle path using two polynomials, such as left and right road polynomials. Processing unit 110 may calculate the geometric midpoint between the two polynomials and offset each point included in the resultant vehicle path by a predetermined offset (e.g., a smart lane offset), if any (an offset of zero may correspond to travel in the middle of a lane); para. 350: Processing unit 110 may reconstruct the vehicle path using a parabolic spline algorithm).
Regarding claims 6 and 19, as best understood, Kwant discloses the locations of the points correspond to two-dimensional locations of vertices within the image (Kwant; para. 51: the inputs for the RNN 105 include, but are not limited to the following: (1) a precise location of the current instance of the RNN 105 in either absolute coordinates or coordinates relative to the current cell … Accordingly, with respect to the first input, in step 205, the computer vision system 103 determines this precise location with respect to the current cell in which the cursor of the RNN 105 is located. If the current grid cell is the start of a lane line, the computer vision system 103 (e.g., via the respective instance of the RNN 105) determines the precise location of a start location (e.g., any of start locations 701a-701c) within the selected grid cell (e.g., the current grid cell)); and the determining the at least the portion of the geometry of the predicted path comprises determining, based at least on the two-dimensional locations of the vertices, three-dimensional locations within an environment (Shashua; para. 386: these polynomials may represent curves extending in three dimensions (e.g., including a height component) to represent elevation changes in a road segment in addition to X-Y curvature), and determining, based at least on the three-dimensional locations, the at least the portion of the geometry of the predicted path (Shashua; para. 387: each target trajectory may correspond to a spline connecting three-dimensional polynomial segments).
Regarding claim 7, as best understood, Kwant, as modified, discloses the locations of the points correspond to pixel locations within the image (Kwant; para. 49: the convolutional layers will result in the feature map 601 that can be used to predict lane lines. In one embodiment, a subset of the channels of the feature map encode at least the start locations of each of lane line detected in the input image. For example, the CNN 107 can be trained to recognize or classify pixels that correspond to the start of an object contour or lane line in input images, and then encode the detected start locations and/or characteristics of the start locations into the subset of channels); and the determining the at least the portion of the geometry of the predicted path comprises determining, based at least on the pixel locations, three-dimensional locations within an environment (Shashua; para. 386: these polynomials may represent curves extending in three dimensions (e.g., including a height component) to represent elevation changes in a road segment in addition to X-Y curvature), and determining, based at least on the one or more three-dimensional locations, the at least the portion of the geometry of the predicted path (Shashua; para. 387: each target trajectory may correspond to a spline connecting three-dimensional polynomial segments).
Regarding claim 8, as best understood, Kwant, as modified, discloses generating, based at least on at least one of the predicted path (Kwant; para. 43: Each frame of the image stream can then be processed to provide real-time detection of lane-lines or other object contours using the various embodiments described herein to output lane or object contour coordinates or models in the form of polylines or equivalent representation), one or more locations of the one or more objects, or one or more wait conditions, a model associated with an environment for which the machine is navigating (Kwant; para. 44: lane models (and similarly models of other object contours) are typically represented as sets of polylines 401a-401c, in which the centerlines of the respective lanes 303a-303c are represented by piecewise-linear functions with an arbitrary number of points), wherein the causing the machine to navigate from the first location along the predicted path to the second location along the predicted path is based at least on the model (Shashua; para. 349: FIG. 5E is a flowchart showing an exemplary process 500E for causing one or more navigational responses in vehicle 200 based on a vehicle path, consistent with the disclosed embodiments. At step 570, processing unit 110 may construct an initial vehicle path associated with vehicle 200. The vehicle path may be represented using a set of points expressed in coordinates (x, z), and the distance d.sub.i between two points in the set of points may fall in the range of 1 to 5 meters. In one embodiment, processing unit 110 may construct the initial vehicle path using two polynomials, such as left and right road polynomials. Processing unit 110 may calculate the geometric midpoint between the two polynomials and offset each point included in the resultant vehicle path by a predetermined offset (e.g., a smart lane offset), if any (an offset of zero may correspond to travel in the middle of a lane)).
Regarding claim 9, as best understood, Kwant discloses a system comprising one or more processors (Kwant; fig. 10: processor 1002) configured to: obtain image data representative of an image depicting a predicted path, a portion of the predicted path occluded by one or more objects within the image (Kwant; para. 61: As each image or frame of the video is captured, the computer vision system 103 processes the image using the RNN 105 to iteratively construct respective polyline representations 803a-803e of the lane lines depicted in each image or frame … the computer vision system 103 can trace through potential occlusions of the object contours or lane lines in the input image. In the example of FIG. 8, the polyline 803d can be constructed by tracing through the occlusion 805 (e.g., a puddle obscuring the underlying lane line) to find a next point along the corresponding lane line … Other common potential occlusions includes include cars or other vehicles in the input image that occlude a road feature or object of interest (e.g., a lane line).); determine, using one or more neural networks (Kwant; para. 28: a system capable of detecting object contours using a cursor recurrent neural network) and based at least on the image data representative of the image, one or more distance values indicating one or more distances from one or more fixed locations of one or more anchor points within the image to one or more two-dimensional points within the image that are associated with a predicted path of a machine (Kwant; para. 51: the inputs for the RNN 105 include, but are not limited to the following: (1) a precise location of the current instance of the RNN 105 in either absolute coordinates or coordinates relative to the current cell … Accordingly, with respect to the first input, in step 205, the computer vision system 103 determines this precise location with respect to the current cell in which the cursor of the RNN 105 is located.
If the current grid cell is the start of a lane line, the computer vision system 103 (e.g., via the respective instance of the RNN 105) determines the precise location of a start location (e.g., any of start locations 701a-701c) within the selected grid cell (e.g., the current grid cell)), wherein at least a portion of the one or more two-dimensional points are associated with the portion of the predicted path that is occluded by the one or more objects within the image (Kwant; para. 61: polyline 803d can be constructed by tracing through the occlusion); determine, based at least on the one or more fixed locations and the one or more distance values indicating the one or more distances, one or more two-dimensional locations of the one or more two-dimensional points within the one or more sensor data representations (Kwant; para. 61: the computer vision system 103 constructs the corresponding polylines 803a-803e from the starting point of each lane line in the frame progressing upwards in the frame in real-time or near-real time); determine, based at least on the one or more two-dimensional locations of the one or more two-dimensional points, one or more three-dimensional locations of one or more three-dimensional points within an environment that are associated with the predicted path of the machine (Kwant; para. 43: Each frame of the image stream can then be processed to provide real-time detection of lane-lines or other object contours using the various embodiments described herein to output lane or object contour coordinates or models in the form of polylines or equivalent representation); and cause, using one or more control systems of the machine and based at least on the one or more three-dimensional points within the environment, the machine to navigate (Kwant; para. 67: the application 121 may also be any type of application that is executable on the UE 117 and/or vehicle 101, such as autonomous driving applications).
Kwant does not explicitly disclose causing the machine to navigate from a first location along the predicted path to a second location along the predicted path.
Shashua discloses causing a machine to navigate from a first location along a predicted path to a second location along the predicted path (Shashua; para. 349: FIG. 5E is a flowchart showing an exemplary process 500E for causing one or more navigational responses in vehicle 200 based on a vehicle path, consistent with the disclosed embodiments. At step 570, processing unit 110 may construct an initial vehicle path associated with vehicle 200. The vehicle path may be represented using a set of points expressed in coordinates (x, z), and the distance d.sub.i between two points in the set of points may fall in the range of 1 to 5 meters. In one embodiment, processing unit 110 may construct the initial vehicle path using two polynomials, such as left and right road polynomials. Processing unit 110 may calculate the geometric midpoint between the two polynomials and offset each point included in the resultant vehicle path by a predetermined offset (e.g., a smart lane offset), if any (an offset of zero may correspond to travel in the middle of a lane)).
Kwant clearly discloses the determined two-dimensional points represent three-dimensional points within the environment (Kwant; fig. 4). Shashua also discloses determining, based at least on one or more two-dimensional points, one or more three-dimensional points within an environment that are associated with the predicted path of the machine (Shashua; para. 386: these polynomials may represent curves extending in three dimensions (e.g., including a height component) to represent elevation changes in a road segment in addition to X-Y curvature).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the autonomous driving application of Kwant to drive the vehicle along a three dimensional path between lane boundaries, as disclosed by Shashua, to yield the predictable result of accurately driving the vehicle on a road.
Regarding claim 10, as best understood, Kwant, as modified, discloses the one or more two-dimensional locations of one or more two-dimensional points that are associated with the predicted path of the machine comprise one or more pixel locations within the image that are associated with the predicted path (Kwant; para. 49: the convolutional layers will result in the feature map 601 that can be used to predict lane lines. In one embodiment, a subset of the channels of the feature map encode at least the start locations of each of lane line detected in the input image. For example, the CNN 107 can be trained to recognize or classify pixels that correspond to the start of an object contour or lane line in input images, and then encode the detected start locations and/or characteristics of the start locations into the subset of channels; para. 61: the computer vision system 103 constructs the corresponding polylines 803a-803e from the starting point of each lane line in the frame progressing upwards in the frame in real-time or near-real time); and the determination of the one or more three-dimensional locations of the one or more three-dimensional points within the environment that are associated with the predicted path of the machine is based at least on the one or more pixel locations (Shashua; para. 386: these polynomials may represent curves extending in three dimensions (e.g., including a height component) to represent elevation changes in a road segment in addition to X-Y curvature).
Regarding claim 12, as best understood, Kwant, as modified, discloses the one or more two-dimensional points comprise at least one of: one or more first two-dimensional points that are associated with one or more path edges associated with the predicted path (Kwant; para. 61: the computer vision system 103 constructs the corresponding polylines 803a-803e from the starting point of each lane line in the frame progressing upwards in the frame in real-time or near-real time); or one or more second two-dimensional points that are associated with a center of the predicted path.
Regarding claim 13, as best understood, Kwant, as modified, discloses the one or more three-dimensional points within the environment comprise at least one of: one or more first three-dimensional points that are associated with one or more path edges within the environment (Kwant; para. 61: the computer vision system 103 constructs the corresponding polylines 803a-803e from the starting point of each lane line in the frame progressing upwards in the frame in real-time or near-real time); or one or more second three-dimensional points that are associated with a center of the predicted path through the environment (Shashua; para. 386: these polynomials may represent curves extending in three dimensions (e.g., including a height component) to represent elevation changes in a road segment in addition to X-Y curvature; para. 349: Processing unit 110 may calculate the geometric midpoint between the two polynomials and offset each point included in the resultant vehicle path by a predetermined offset (e.g., a smart lane offset), if any (an offset of zero may correspond to travel in the middle of a lane)).
Regarding claim 15, as best understood, Kwant, as modified, discloses the one or more processors are further configured to: determine, based at least on the one or more three-dimensional locations of the one or more three-dimensional points within the environment, at least a portion of a geometry of the predicted path (Shashua; para. 386: these polynomials may represent curves extending in three dimensions (e.g., including a height component) to represent elevation changes in a road segment in addition to X-Y curvature), wherein the machine is caused to navigate from the first location along the predicted path to the second location along the predicted path based at least on the at least the portion of the geometry of the predicted path (Shashua; para. 349: FIG. 5E is a flowchart showing an exemplary process 500E for causing one or more navigational responses in vehicle 200 based on a vehicle path, consistent with the disclosed embodiments. At step 570, processing unit 110 may construct an initial vehicle path associated with vehicle 200. The vehicle path may be represented using a set of points expressed in coordinates (x, z), and the distance d.sub.i between two points in the set of points may fall in the range of 1 to 5 meters. In one embodiment, processing unit 110 may construct the initial vehicle path using two polynomials, such as left and right road polynomials. Processing unit 110 may calculate the geometric midpoint between the two polynomials and offset each point included in the resultant vehicle path by a predetermined offset (e.g., a smart lane offset), if any (an offset of zero may correspond to travel in the middle of a lane)).
Regarding claims 17 and 20, as best understood, Kwant, as modified, discloses the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing deep learning operations; a system implemented using a robot; a system for generating synthetic data (Kwant; para. 67: vehicle 101 may execute the software application 121 to detect lane lines in image data using a cursor recurrent neural network or equivalent according the embodiments described herein. By way of example, the application 121 may also be any type of application that is executable on the UE 117 and/or vehicle 101, such as autonomous driving applications, mapping applications, location-based service applications, navigation applications, content provisioning services, camera/imaging application, media player applications, social networking applications, calendar applications, and the like); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources.
Regarding claim 18, as best understood, Kwant discloses a processor comprising processing circuitry (Kwant; fig. 10: processor 1002) to: generate, by one or more neural networks and based at least on sensor data representative of a sensor representation, output data representative of one or more distance values indicating one or more distances between one or more fixed locations of one or more anchor points and one or more vertices associated with a predicted path as represented by the sensor representations (Kwant; para. 45: a grid 501 segments the input image (e.g., the image 301 as shown in FIG. 3) into individual grid cells [the fixed locations are at the intersections of the grid cells]; para. 51: the inputs for the RNN 105 include, but are not limited to the following: (1) a precise location of the current instance of the RNN 105 in either absolute coordinates or coordinates relative to the current cell … Accordingly, with respect to the first input, in step 205, the computer vision system 103 determines this precise location with respect to the current cell in which the cursor of the RNN 105 is located. If the current grid cell is the start of a lane line, the computer vision system 103 (e.g., via the respective instance of the RNN 105) determines the precise location of a start location (e.g., any of start locations 701a-701c) within the selected grid cell (e.g., the current grid cell)), wherein at least a portion of the one or more vertices associated with the predicted path are occluded by one or more objects represented by the sensor representation (Kwant; para. 61: As each image or frame of the video is captured, the computer vision system 103 processes the image using the RNN 105 to iteratively construct respective polyline representations 803a-803e of the lane lines depicted in each image or frame … the computer vision system 103 can trace through potential occlusions of the object contours or lane lines in the input image. In the example of FIG. 8, the polyline 803d can be constructed by tracing through the occlusion 805 (e.g., a puddle obscuring the underlying lane line) to find a next point along the corresponding lane line … Other common potential occlusions include cars or other vehicles in the input image that occlude a road feature or object of interest (e.g., a lane line).); determine, based at least on the one or more fixed locations of the one or more anchor points and the one or more distance values indicating the one or more distances, one or more locations of the one or more vertices associated with the predicted path; determine, based at least on the one or more vertices associated with the predicted path, at least a portion of a geometry of the predicted path (Kwant; para. 61: the computer vision system 103 constructs the corresponding polylines 803a-803e from the starting point of each lane line in the frame progressing upwards in the frame in real-time or near-real time); and cause, using one or more control systems of a machine and based at least on the at least the portion of the geometry, the machine to navigate (Kwant; para. 67: the application 121 may also be any type of application that is executable on the UE 117 and/or vehicle 101, such as autonomous driving applications).
Kwant does not explicitly disclose causing the machine to navigate from a first location along the predicted path to a second location along the predicted path.
Shashua, in the same field of endeavor (vehicle navigation), discloses causing a machine to navigate from a first location along a predicted path to a second location along the predicted path (Shashua; para. 349: FIG. 5E is a flowchart showing an exemplary process 500E for causing one or more navigational responses in vehicle 200 based on a vehicle path, consistent with the disclosed embodiments. At step 570, processing unit 110 may construct an initial vehicle path associated with vehicle 200. The vehicle path may be represented using a set of points expressed in coordinates (x, z), and the distance d.sub.i between two points in the set of points may fall in the range of 1 to 5 meters. In one embodiment, processing unit 110 may construct the initial vehicle path using two polynomials, such as left and right road polynomials. Processing unit 110 may calculate the geometric midpoint between the two polynomials and offset each point included in the resultant vehicle path by a predetermined offset (e.g., a smart lane offset), if any (an offset of zero may correspond to travel in the middle of a lane)).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the autonomous driving application of Kwant to drive the vehicle along a path between lane boundaries, as disclosed by Shashua, to yield the predictable result of accurately driving the vehicle on a road.
Regarding claim 21, as best understood, Kwant, as modified, discloses the distance values include at least one or more first distance values associated with a first coordinate direction and one or more second distance values associated with a second coordinate direction (Kwant; para. 51: the inputs for the RNN 105 include, but are not limited to the following: (1) a precise location of the current instance of the RNN 105 in either absolute coordinates or coordinates relative to the current cell … Accordingly, with respect to the first input, in step 205, the computer vision system 103 determines this precise location with respect to the current cell in which the cursor of the RNN 105 is located. If the current grid cell is the start of a lane line, the computer vision system 103 (e.g., via the respective instance of the RNN 105) determines the precise location of a start location (e.g., any of start locations 701a-701c) within the selected grid cell (e.g., the current grid cell)). Although neither claim 21 nor Kwant explicitly discloses that the first and second coordinate directions are different, it is known that a point is represented by an ordered pair (x, y) of numbers, where the first number conventionally represents the horizontal and is often denoted by x, and the second number conventionally represents the vertical and is often denoted by y; see the Wikipedia article titled “Point (geometry)”. In addition, Shashua explicitly discloses, in paragraph 349, that the vehicle path is represented using a set of points expressed in coordinates (x, z).
Regarding claim 22, as best understood, Kwant, as modified, discloses the output data further represents classifications associated with the points (Kwant; para. 49: CNN 107 can be trained to recognize or classify pixels that correspond to the start of an object contour or lane line in input images, and then encode the detected start locations and/or characteristics of the start locations into the subset of channels); and the navigating the machine is further based at least on the classifications (Kwant; para. 50: computer vision system 103 can instantiate an instance of the RNN 105 at each of the lane line start locations 701a-701c. In this way, the instances of the RNN 105 can process each lane line in parallel to improve processing speed and provide real-time or near real-time lane detection).
Regarding claim 23, as best understood, Kwant, as modified, discloses the one or more anchor points are associated with an anchor line associated with the image (Kwant; para. 45: a grid 501 segments the input image (e.g., the image 301 as shown in FIG. 3) into individual grid cells [i.e., each grid line comprises a plurality of grid intersection points]).
Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kwant in view of Shashua as applied to claims 1 and 9 above, and further in view of Nakagoshi et al. (US 2009/0048737), hereinafter Nakagoshi.
Regarding claims 3 and 14, as best understood, Kwant, as modified, discloses the invention substantially as claimed, as described above.
Kwant, as modified, does not explicitly disclose determining a path class associated with the predicted path, the path class including at least one of: a straight path; a right path; or a left path.
Nakagoshi, in the same field of endeavor (driving assistance systems), discloses determining a path class associated with the predicted path, the path class including at least one of: a straight path (Nakagoshi; para. 60: the types of traveling path are further minutely classified into "the straight path", "the entrance point and the exit point of a curved path", "the point of the greatest curvature", and "the sections between the entrance or exit point of a curved path and the point of the greatest curvature". Then, the awake state estimation device in accordance with the embodiment specifically determines the type of the traveling path on which the vehicle is presently running, of the foregoing types of traveling paths, on the basis of traveling information); a right path; or a left path.
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the neural network of Kwant to classify the vehicle path into straight and curved sections, as disclosed by Nakagoshi, with the motivation of individually correcting the steering angle, according to the specifically determined type of traveling path, with a correction value calculated beforehand (Nakagoshi; para. 60), thereby improving the accuracy of the driving system.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH THOMPSON whose telephone number is (571)272-3660. The examiner can normally be reached Mon-Thurs 9:00AM-3:00PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Bishop can be reached on (571)270-3713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH THOMPSON/Examiner, Art Unit 3665
/Erin D Bishop/Supervisory Patent Examiner, Art Unit 3665