Prosecution Insights
Last updated: April 19, 2026
Application No. 17/391,618

USING MAP INFORMATION TO SMOOTH OBJECTS GENERATED FROM SENSOR DATA

Non-Final OA: §101, §102, §103, §112
Filed: Aug 02, 2021
Examiner: NICKERSON, SAMANTHA K
Art Unit: 3645
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Waymo LLC
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (above average; 511 granted / 597 resolved; +33.6% vs TC avg)
Interview Lift: +15.4% across resolved cases with interview
Avg Prosecution: 2y 11m typical timeline; 8 applications currently pending
Total Applications: 605 across all art units

Statute-Specific Performance

§101: 2.0% (-38.0% vs TC avg)
§102: 14.0% (-26.0% vs TC avg)
§103: 53.9% (+13.9% vs TC avg)
§112: 23.4% (-16.6% vs TC avg)

Note: Tech Center averages are estimates. Based on career data from 597 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1 recites the following limitations directed to an abstract idea: generating a representation of an object detected by a perception system of a vehicle (mental process: human being observing an environment and imagining, drawing, or describing an object); determining a first position for the representation of the object based on a location of a representative point selected from a plurality of locations of lane center points (mental process: human being observing an environment, seeing an object, and comparing the location of the object to the center of a lane, e.g. another vehicle and its position within a lane); determining a second position for the representation of the object based on a set of data points received from the perception system that correspond to the object (mental process: human being observing the environment, seeing the object, and comparing a new location of the object to the center of a lane, e.g. the other vehicle traveling within a lane); determining a third position for the representation of the object based on a distance between the first position and the second position (mental process: human being observing the environment, seeing the object, and comparing a new location of the object to the center of a lane, e.g. the human being estimating the distance traveled by the other vehicle between a first observed position and a second observed position); and generating the representation of the object using the third position (mental process: human being imagining, drawing, or describing the object based on observations within the environment, e.g. human being mentally imagining the third position of the observed other vehicle within the environment).

Claim 17 recites the following limitations directed to an abstract idea: generating a representation of an object detected by a perception system of a vehicle (mental process: human being observing an environment and imagining, drawing, or describing an object, wherein the perception system is the human being, e.g. human being creating a mental image of another vehicle within the environment); accessing map information identifying center points of traffic lanes to identify a center point of a traffic lane that is closest to a location of the object (mental process: human being observing the environment and mentally performing distance estimations between a center lane point and the object); and determining a location for the representation of the object based on a distance between the identified center point and the location of the object (mental process: human being observing the environment, seeing the object, and comparing a location of the object to the center of a lane).

Regarding claims 1 and 17, the following limitations are directed to additional elements: (a) the steps in the body of the claim are performed by "one or more processors"; and (b) the method includes an extra-solution activity step of displaying a result.

With respect to step 2A, prong 2, the additional elements fail to integrate the abstract idea into a practical application. The claims are not directed to, or limited to, a technical solution solving a technical problem. They fail to provide an improvement to a technology or to the functioning of a computer.
See MPEP 2106.04(d)(1). Instead, the additional elements merely recite, at a high level of generality, a general purpose computing structure that is used as a tool for implementing the abstract idea (additional element (a) above). Thus, the examiner finds these additional elements are mere instructions to implement the judicial exception. See MPEP 2106.05(f) in light of 2106.04(d). Regarding additional element (b) above, this limitation merely adds insignificant extra-solution activity in the form of displaying the generated representation of the object on a display of the vehicle. As such, the examiner must conclude the claimed invention is not integrated into a practical application.

With respect to step 2B, the additional elements fail to recite significantly more, alone or in combination with the abstract idea. The claims are not directed to, or limited to, a technical solution solving a technical problem. They fail to provide an improvement to a technology or to the functioning of a computer. See MPEP 2106.04(d)(1). Instead, the additional elements merely recite, at a high level of generality, a general purpose computing structure that is used as a tool for implementing the abstract idea (additional element (a) above). Thus, the examiner finds these additional elements are mere instructions to implement the judicial exception. See MPEP 2106.05(f) in light of 2106.04(d). Regarding additional element (b) above, this limitation merely adds insignificant extra-solution activity in the form of displaying the generated representation of the object on a display of the vehicle. As such, the examiner must conclude the claimed invention fails to recite significantly more, alone or in combination with the abstract idea.

Therefore, the examiner concludes that claims 1 and 17 are directed to an abstract idea without significantly more.
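To make the claimed data flow concrete for non-patent readers, the claim 1 sequence (a first position taken from a mapped lane center point, a second position from sensor data points, and a third position derived from their separation) can be sketched as follows. This is a minimal illustration only: the centroid, the linear blend, the 5 m default, and all function names are assumptions, not the applicant's disclosed implementation.

```python
import math

def nearest_lane_center(obj_xy, lane_center_points):
    # First position: snap to the closest mapped lane center point.
    return min(lane_center_points, key=lambda p: math.dist(p, obj_xy))

def smoothed_position(lane_center_points, data_points, max_dist=5.0):
    # Second position: here, simply the centroid of the sensor data points.
    sx = sum(p[0] for p in data_points) / len(data_points)
    sy = sum(p[1] for p in data_points) / len(data_points)
    second = (sx, sy)
    first = nearest_lane_center(second, lane_center_points)

    # Third position: blend the two, weighted by how far apart they are.
    # Far from the lane center, the sensor-based position dominates.
    d = math.dist(first, second)
    w = min(d / max_dist, 1.0)
    return (first[0] * (1 - w) + second[0] * w,
            first[1] * (1 - w) + second[1] * w)
```

A small separation keeps the representation near the lane center; a large one defers almost entirely to the sensor data, which matches the distance-dependent "contributions" recited in the dependent claims.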
Regarding claim 2, this claim recites the following limitations that further recite an abstract idea: receiving a set of data points from the perception system, the data points of the set of data points corresponding to the object detected during a sweep of a laser of a laser sensor (mental process: human being observing multiple points within the environment, which could correspond to those detected by a laser sensor sweep. Examiner note: a step of a laser sensor performing a laser sweep is not positively recited as being a required method step). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.

Regarding claim 3, this claim recites the following limitations that further recite an abstract idea: prior to receiving the set of data points, receiving an earlier set of data points corresponding to the object detected during a first sweep of the laser prior to the sweep of the laser (mental process: human being making multiple observations of the environment to collect two sets of data points, which could correspond to a first sweep of a laser. Examiner note: a step of a laser sensor performing a first laser sweep is not positively recited as being a required method step); determining a length based on the earlier set of data points (mental process: human being observing an object and determining its length based on a prior observation); wherein generating the representation is further based on the length (mental process: human being observing the object and visually noting its length/size). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.
Regarding claim 4, this claim recites the following limitations that further recite an abstract idea: wherein the length is determined further based on an angular width of a side of the object corresponding to the length from a perspective of the laser (mental process: since a human being can perform the step of determining length, as from earlier claims on which claim 4 depends, it follows that the human being can make the length determination with any relevant information). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.

Regarding claim 5, this claim recites the following limitations that further recite an abstract idea: wherein the angular width is less than a minimum threshold angular width value (mental process: since a human being can perform the step of determining length, as from earlier claims on which claim 5 depends, it follows that the human being can make the length determination with any relevant information). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.

Regarding claim 6, this claim recites the following limitations that further recite an abstract idea: retrieving map information identifying shapes and locations of lanes as well as locations of center points for the lanes (mental process: human being observing the shape and location of lanes and their center points). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.
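The angular-width limitation of claims 4 and 5 has a standard geometric reading: a side of an object that subtends an angle theta at range r has a chord length of about 2·r·sin(theta/2). The sketch below is an illustration of that geometry only; the function name, the chord formula as the claimed determination, and the example 2 mrad threshold are assumptions, not the application's disclosure.

```python
import math

def side_length_from_angular_width(range_m, angular_width_rad,
                                   min_width_rad=0.002):
    # A side subtending angle theta at range r has chord length
    # 2 * r * sin(theta / 2); for small theta this is roughly r * theta.
    if angular_width_rad < min_width_rad:
        # Below a minimum angular width (cf. claim 5's threshold),
        # the side is too narrow to measure reliably.
        return None
    return 2.0 * range_m * math.sin(angular_width_rad / 2.0)
```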
Regarding claim 7, this claim recites the following limitations that further recite an abstract idea: wherein the map information further includes heading information identifying headings for the lanes (mental process: human being observing how his vehicle and other vehicles are traveling on a roadway); determining a first orientation based on heading of a lane associated with the representative point (mental process: human being observing an orientation of the object based on the heading); determining a second orientation based on the set of data points (mental process: human being observing the object (based on data points) and an orientation for the representation of the object); combining the first orientation and the second orientation in order to determine a third orientation for the representation of the object based on the distance, and wherein the representation for the object is generated using the third orientation (mental process: human being observing over time an object that is moving or observing a stationary object from a moving vehicle). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole. 
Regarding claim 8, this claim recites the following limitations that further recite an abstract idea: when the distance is a first distance, the first orientation has a first contribution to the third orientation (mental process: human being considering a final/third orientation of the object based on previously observed orientations and corresponding observed distances); when the distance is a second distance greater than the first distance, the second orientation has a second contribution to the third orientation that is greater than the first contribution (mental process: human being considering a final/third orientation of the object based on previously observed orientations and corresponding observed distances). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.

Regarding claim 9, this claim recites the following limitations that further recite an abstract idea: selecting a representative point from the center points based on the set of data points (mental process: human being observing the object and noting a point to be considered a representative point). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.
Regarding claim 10, this claim recites the following limitations that further recite an abstract idea: wherein the representation of the object is a smoothed representation (mental process: human being imagining moving his mental eye around the object to visualize its details, creating a drawing of the object to capture its essence, visualizing the object in his mind while focusing on its features and textures, forming a mental image of the object and imagining walking toward it, and/or considering how the object might look from different angles, enhancing his mental representation of the object). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.

Regarding claim 11, this claim recites the following limitations that further recite an abstract idea: when the distance is a first distance, the first position has a first contribution to the third position (mental process: human being considering a final/third position of the object based on previously observed positions and corresponding observed distances); when the distance is a second distance greater than the first distance, the second position has a second contribution to the third position that is greater than the first contribution (mental process: human being considering a final/third position of the object based on previously observed positions and corresponding observed distances). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.
Regarding claim 12, this claim recites the following limitations that further recite an abstract idea: determining a first width for the representation of the object based on a width of the lane associated with the representative point (mental process: human being observing lane width with respect to a given/representative point); determining a second width for the representation of the object based on the set of data points (mental process: human being observing a subsequent width of the object); combining the first width and the second width in order to determine a third width for the representation of the object based on the distance, and wherein the representation for the object is generated using the third width (mental process: human being mentally considering observed widths and deciding on a final/third width that best represents the object based on distance). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.

Regarding claim 13, this claim recites the following limitations that further recite an abstract idea: when the distance is a first distance, the first width has a first contribution to the third width (mental process: human being considering a final/third width of the object based on previously observed widths and corresponding observed distances); when the distance is a second distance greater than the first distance, the second width has a second contribution to the third width that is greater than the first contribution (mental process: human being considering a final/third width of the object based on previously observed widths and corresponding observed distances). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.

Regarding claim 14, this claim recites the following limitations that further recite an abstract idea: determining a mix value based on a distance between the first position and the second position (mental process: human being considering both observed positions of the object and observed distances therebetween); wherein determining the third position includes using the mix value in conjunction with the first position and the second position (mental process: human being observing the object and considering multiple positions of the object as well as considering the distance between positions of the object, the object either moving with respect to the human being or stationary with respect to the moving human being). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.

Regarding claim 15, this claim recites the following limitations that further recite an abstract idea: determining the third position includes using the mix value to determine a first contribution by the first position and a second contribution by the second position (mental process: human being considering a final/third position of the object based on previously observed positions and corresponding observed distances). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.
Regarding claim 16, this claim recites the following limitations that further recite an abstract idea: determining the third position includes summing the first contribution and the second contribution (mental process: human being performing mental addition). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 1 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.

Regarding claim 18, this claim recites the following limitations that further recite an abstract idea: receiving a set of data points from the perception system, the data points of the set of data points corresponding to the object detected during a sweep of a laser of a laser sensor (mental process: human being observing multiple points within the environment, which could correspond to those detected by a laser sensor sweep. Examiner note: a step of a laser sensor performing a laser sweep is not positively recited as being a required method step); determining a location of the object from the set of data points (mental process: human being observing a location of the object based on observing multiple points within the environment). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 17 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.
Regarding claim 19, this claim recites the following limitations that further recite an abstract idea: determining an orientation of the object based on the set of data points (mental process: human being observing the object and determining its orientation); determining an orientation for the representation of the object based on the orientation of the object and the distance between the identified center point and the location of the object (mental process: human being using his observed orientation of the object and the distance of the object from a lane center point to create a mental image of the object). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 17 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.

Regarding claim 20, this claim recites the following limitations that further recite an abstract idea: determining a width of the object based on the set of data points (mental process: human being observing a width of the object based on the environment); determining a width for the representation of the object based on the width of the object and the distance between the identified center point and the location of the object (mental process: human being imagining the object's width based on its distance from the center of the lane). There are no new additional elements recited in these claims. Thus, the analysis for step 2A, prong 2 and step 2B is identical to that found within claim 17 and is hereby incorporated by reference, even with an additional consideration for the claims as a whole.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3-5, 8, 11, and 13-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 3 recites "determining a length" but fails to indicate a length of what. The specification, at 0093, indicates determining a length of a representation, a length of a prior set of data, and a length of an object, so the claim language is unclear as to which of these the length refers to. For purposes of examination, the limitation at issue will be interpreted as a length associated with data points that could indicate the length of an object or of a representation of an object.

Claims 11 and 13 ultimately depend on claim 1, which recites "a distance between the first position and the second position". Claims 11 and 13 recite "when the distance is a first distance" and then another limitation, "when the distance is a second distance greater than the first distance". "The distance" of claims 11 and 13 refers to "a distance between the first position and the second position" of claim 1, which sets forth a single distance determination, not multiple, different possible distances. Thus, the claims are indefinite as to how the single distance of claim 1 is now intended to be defined as two different distances. It is unclear how applicant intends to define "the distance" as the distance between the first and second positions as in claim 1 and then to later define the distance as a first distance and, alternatively, as a second and different distance.
That is to say, it is unclear how applicant intends for a single distance to be interpreted as two separate, distinct, and different distances without having first established multiple distance determinations. For purposes of examination, the limitations at issue will be interpreted as multiple distances being determined at subsequent times based on the object's position/navigation through an environment.

Claim 14 recites "determining a mix value based on a distance between the first position and the second position". First, the examiner believes "based on a distance" is intended to refer to the distance determination introduced in claim 1. If that is applicant's intention, "a distance" of claim 14 should be rewritten as "the distance" to correct an antecedent basis deficiency. If applicant instead intends for "a distance" of claim 14 to refer to a new distance, distinct and different from that recited in claim 1, the claim would be indefinite for failure to clearly establish how the two distances are intended to be distinct, such as how each is determined. For purposes of examination, the limitation at issue will be interpreted as referring to the distance established in claim 1.

Second, claims 14-16 recite "a mix value", but the claims do not define what the value represents, what it comprises, or how it is otherwise determined. Looking to the specification for guidance, there is no set definition of "mix value"; in fact, the specification uses "mix amount" and "mix value" seemingly interchangeably, and it is not clear whether applicant intends for mix value and mix amount to be the same or different limitations. An example of a "mix" is recited at 0082 as a "mix amount" based on the distance between the average location and the representative point and a maximum distance value, and at 0086 as the example equation "mix amount = distance to representative point / maximum distance value".
However, no clear definition of "mix amount" or "mix value" is presented, only such examples. Further, the examples provided in the specification do not clearly translate to the claim language; thus it is unclear how the mix value based on a distance between the first and second positions is determined, and how the mix value is related to the third position. For purposes of examination, the example(s) of 0082 and 0086 will be considered herein.

Additionally, claim 15 recites "using the mix value to determine a first contribution by the first position and a second contribution by the second position", which is indefinite because it is unclear what operation is intended by the word "by". For purposes of examination, this limitation will be interpreted as tangentially relating the mix value to the first and second contributions.

Claim 8 is indefinite for the same reasons presented above with respect to the first and second distance considerations.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Yi (US 2017/0343374).

1: Yi discloses a method for generating a representation of an object detected by a perception system of a vehicle [as illustrated in fig.
8B], the method comprising: determining, by one or more processors, a first position for the representation of the object based on a location of a representative point selected from a plurality of locations of lane center points [at least 0070 teaches that a center point of a guiding track may be overlapped with a center position of a lane corresponding to a lane object where the guiding track object is superimposed and displayed, and then the position of the guiding track may be determined according to a high precision map and a preset width corresponding to the guiding track. Further, positions of respective points on the contour of the guiding track may be determined, wherein these points correspond to the center of the lane and wherein a vehicle corresponds to the claimed object. A first position of the vehicle is determined in 0073 as a positioning state of the vehicle at time k with respect to x_k and y_k]; determining, by the one or more processors, a second position for the representation of the object based on a set of data points received from the perception system that correspond to the object [a second position of the vehicle is determined in 0073 as a positioning state of the vehicle at a subsequent time k with respect to x_k and y_k, since the vehicle would be moving, or would be predicted to be moving, along the roadway]; determining, by the one or more processors, a third position for the representation of the object based on a distance between the first position and the second position [since 0073 teaches determining vehicle positions based on x and y geodetic coordinates at a given time k, it follows that a third position of the vehicle would be predicted based on the actual or predicted movement of the vehicle from the determinations of the first and second positions.
Thus, determination of a third position would be based on the distance between the first and second positions as the vehicle moves or is predicted to move along the roadway, with the third position being further along the roadway (in distance) than the first and second positions]; generating, by the one or more processors, the representation of the object using the third position [fig. 8B illustrates a display of the representation of the object, thus generation of the representation of the object is inherent in order to display the representation of the object]; and displaying, by the one or more processors, the generated representation of the object on a display of the vehicle [fig. 8B illustrates a display of the generated representation of the object].

17: Yi discloses a method for generating a representation of an object detected by a perception system of a vehicle, the method comprising: accessing, by one or more processors, map information identifying center points of traffic lanes to identify a center point of a traffic lane that is closest to a location of the object [at least 0070 teaches that a center point of a guiding track may be overlapped with a center position of a lane corresponding to a lane object where the guiding track object is superimposed and displayed, and then the position of the guiding track may be determined according to a high precision map and a preset width corresponding to the guiding track. Further, positions of respective points on the contour of the guiding track may be determined, wherein these points correspond to the center of the lane and wherein a vehicle corresponds to the claimed object.
A first position of the vehicle is determined in 0073 as a positioning state of the vehicle at time k with respect to xsubk and ysubk]; determining, by the one or more processors, a location for the representation of the object based on a distance between the identified center point and the location of the object [0070 teaches a center point of the guiding track may be overlapped with a center position of the lane corresponding to the lane object where the guiding track object are superimposed and displayed, and then the position of the guiding track may be determined according to the high precision map and a preset width corresponding to the guiding track. For example, positions of respective points on the contour of the guiding track may be determined]; and displaying, by the one or more processors, the representation of the object on a display of the vehicle based on the location for the representation of the object [0070 teaches displaying vehicle and lane information; fig. 8B illustrates the display]. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 14-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yi (US 2017/0343374). 
14: Yi teaches determining a mix value based on a distance between the first position and the second position, and wherein determining the third position includes using the mix value in conjunction with the first position and the second position [0037 teaches determining a position corresponding to the maximum probability as the position of the vehicle, wherein a GPS coordinate corresponding to the vehicle is acquired, a lane line in the road condition image is projected to the ground, a distance between the lane line projected to the ground and the lane line in the high-precision map is taken as a measurement error, and a probability distribution of the position of the vehicle is calculated. Thus, a person of ordinary skill in the art would find it obvious that, since the maximum probability corresponds to position and to distance, this also indicates a maximum position and/or maximum distance, wherein the mix value corresponds to a consideration of the distance between two points and a maximum distance].

15: Yi teaches determining the third position includes using the mix value to determine a first contribution by the first position and a second contribution by the second position [0037 teaches that the consideration of the distance between two points and a maximum distance (see the rejection of claim 14) corresponds to the mix value, and thus a person of ordinary skill in the art would find it obvious that the mix value comprises some value that is considered with the first and second positions, which results in creating some contribution thereto].

16: Yi does not explicitly teach determining the third position includes summing the first contribution and the second contribution; however, a person of ordinary skill in the art would find it obvious that summing multiple contributions, or partial values, renders a whole amount, and in this case determines an accurate distance and position value for navigation of a traveling vehicle. Considering partial values for distance or position determinations is common practice in the art, specifically summing those partial values in order to determine a full value, such as summing partial distance values to determine a complete and accurate distance, or summing partial positional values to determine a complete and accurate position, as evidenced in the disclosure above.

Claims 2-10 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yi (US 2017/0343374) in view of Stout (JP 2015-212942).

2, 18 mutatis mutandis: Yi does not explicitly teach, but Stout teaches, a set of data points from the perception system, the data points of the set of data points corresponding to the object detected during a sweep of a laser of a laser sensor [0004-0005] and, for claim 18, determining a location of the object from the set of data points [0044-0045 teach 3D imaging and a 3D coordinate system representing an object within an environment].

3: Yi does not explicitly teach, but Stout teaches, prior to receiving the set of data points, receiving an earlier set of data points corresponding to the object detected during a first sweep of the laser prior to the sweep of the laser; and determining a length based on the earlier set of data points, and wherein generating the representation of the object is further based on the length [0043-0044 teach laser scanning (multiple iterations of data point collection) and 3D point clouds representative of an outer surface of an object, corresponding at least in part to the length of an object].
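The distance-dependent blending mapped for claims 14-16 above (a mix value derived from the gap between the map-based and sensor-based positions, producing two weighted contributions that are summed) can be sketched as follows. This is an illustrative reconstruction for the reader, not code from the application or the cited art; the 2D tuples, the linear ramp, and the `max_distance` cap are all assumptions.

```python
# Illustrative sketch of the claimed position smoothing (claims 14-16).
# Assumptions: positions are 2D (x, y) tuples, and the mix value ramps
# linearly with the distance between the two positions, capped at an
# assumed max_distance (a hypothetical tuning parameter).
import math

def mix_value(first, second, max_distance=5.0):
    """Mix value in [0, 1]; grows as the two positions diverge."""
    d = math.hypot(second[0] - first[0], second[1] - first[1])
    return min(d / max_distance, 1.0)

def third_position(first, second, max_distance=5.0):
    """Claim 16 style: sum the first and second weighted contributions."""
    m = mix_value(first, second, max_distance)
    first_contribution = ((1.0 - m) * first[0], (1.0 - m) * first[1])
    second_contribution = (m * second[0], m * second[1])
    return (first_contribution[0] + second_contribution[0],
            first_contribution[1] + second_contribution[1])

# A small gap keeps the smoothed position near the lane-center (first)
# position; a gap at or beyond max_distance defers to the sensor data.
print(third_position((0.0, 0.0), (1.0, 0.0)))  # -> (0.2, 0.0)
```

Under this reading, the "mix value" is simply the interpolation weight, which is why the examiner treats the two contributions and their sum (claims 15 and 16) as facets of one blending step.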
4: Yi does not explicitly teach, but Stout teaches, the length is determined further based on an angular width of a side of the object corresponding to the length from a perspective of the laser [in addition to the rejection of claim 3, 0040 teaches a laser rangefinder whose beam is reflected by a rotating mirror that collects and scans the distance dimension in one or two dimensions around the scene being digitized at specified angular intervals].

5: Yi in view of Stout does not explicitly teach the length is determined further based on whether the angular width is less than a minimum threshold angular width value; however, Stout teaches at 0040 a laser rangefinder whose beam is reflected by a rotating mirror that collects and scans the distance dimension in one or two dimensions around the scene being digitized at specified angular intervals. Since the claim language only requires that the length is determined based on "whether" the angular width is of a certain value, it follows that a person of ordinary skill in the art would find it obvious that object length would be based on any angular width determination. And since the prior art teaches at 0040 specified angular intervals, it further follows that those angular intervals would reasonably correspond to a minimum threshold angular width value.

Regarding claims 2-5 and 18, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the vehicle navigation disclosed in Yi with the laser-based vehicle navigation disclosed in Stout, with a reasonable expectation of success, because employing a laser as a GPS alternative also allows for determination of data points within an environment, in order to procure a distance image indicating the location of objects in the environment for safe navigation of a vehicle through the environment.
6: Yi teaches retrieving, by the one or more processors, map information identifying shapes and locations of lanes as well as locations of center points for the lanes [0047 teaches lane parameters may include the number of lanes, positions of lane lines, and lane attributes, for example a straight-going lane, a turn lane, and other lane attributes. 0048 teaches a map of the lanes. At least 0056 teaches displaying a guiding track object superimposed and displayed in the center of the current lane in the road condition image].

7: Yi teaches the map information further includes heading information identifying headings for the lanes, and the method further comprising: determining a first orientation based on a heading of a lane associated with the representative point; determining a second orientation based on the set of data points; and combining the first orientation and the second orientation in order to determine a third orientation for the representation of the object based on the distance, and wherein the representation of the object is generated using the third orientation [at least 0056 and 0058 teach driving orientations of a vehicle corresponding to headings, the multiple orientations corresponding to driving the vehicle over time and/or along a path].

8: Yi teaches the first orientation and the second orientation are used such that: when the distance is a first distance, the first orientation has a first contribution to the third orientation, and when the distance is a second distance greater than the first distance, the second orientation has a second contribution to the third orientation that is greater than the first contribution [0037 teaches the consideration of the distance between two points, which results in creating some contribution thereto. At least 0056 and 0058 teach driving orientations of a vehicle, the multiple orientations corresponding to driving the vehicle over time and/or along a path].
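Claims 7 and 8 recite an analogous distance-weighted blend for orientation: the farther the sensor-derived position is from the lane center, the more the sensor-derived heading dominates. A minimal sketch, assuming headings in radians and the same linear distance weighting used for position (both assumptions not stated in the claims), might look like:

```python
# Illustrative sketch of claim 7-8 style orientation blending.
# Assumptions: headings are in radians, and the sensor-derived heading's
# contribution grows linearly with the first/second position gap,
# matching claim 8's "greater distance -> greater second contribution".
import math

def blend_orientation(lane_heading, sensor_heading, distance, max_distance=5.0):
    w = min(distance / max_distance, 1.0)  # weight for the sensor heading
    # Blend on the unit circle so headings near +/-pi combine correctly
    # instead of averaging across the wraparound.
    x = (1.0 - w) * math.cos(lane_heading) + w * math.cos(sensor_heading)
    y = (1.0 - w) * math.sin(lane_heading) + w * math.sin(sensor_heading)
    return math.atan2(y, x)

# At distance 0 the lane heading is used outright; at max_distance the
# sensor heading wins.
print(blend_orientation(0.0, math.pi / 2, distance=5.0))  # -> ~1.5708
```

Blending via unit vectors rather than raw angles is a design choice here: it keeps the result well-defined when the two headings straddle the +/-pi boundary.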
9: Yi teaches a representative point from the center points based on the set of data points [at least 0070 teaches a center point of the guiding track may be overlapped with a center position of the lane corresponding to the lane object where the guiding track object is superimposed and displayed (from a collection of data points)].

10: Yi does not explicitly teach, but Stout teaches, the representation of the object is a smoothed representation [at least 0111 teaches smoothed image processing]. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the vehicle navigation disclosed in Yi with the smoothed representation of an object disclosed in Stout, with a reasonable expectation of success, because smoothing a representation of an object enhances visual quality, reduces noise, preserves features, and/or acts as preprocessing for further analysis such as edge detection or object recognition.

19: Yi teaches the map information further includes heading information identifying headings for the lanes, and the method further comprises: determining an orientation of the object based on the set of data points; and determining an orientation for the representation of the object based on the orientation of the object and the distance between the identified center point and the location of the object, and wherein displaying the representation is further based on the orientation for the representation [at least 0056 and 0058 teach driving orientations of a vehicle corresponding to headings, the multiple orientations corresponding to driving the vehicle over time and/or along a path; 0070 teaches a center point of the guiding track may be overlapped with a center position of the lane corresponding to the lane object where the guiding track object is superimposed and displayed, and then the position of the guiding track may be determined according to the high-precision map and a preset width corresponding to the guiding track. For example, positions of respective points on the contour of the guiding track may be determined].

20: Yi teaches determining a width of the object based on the set of data points; and determining a width for the representation of the object based on the width of the object and the distance between the identified center point and the location of the object, and wherein displaying the representation is further based on the width for the representation [0070 teaches a center point of the guiding track may be overlapped with a center position of the lane corresponding to the lane object where the guiding track object is superimposed and displayed, and then the position of the guiding track may be determined according to the high-precision map and a preset width corresponding to the guiding track. For example, positions of respective points on the contour of the guiding track may be determined; 0047 teaches various lane attributes].

Allowable Subject Matter

Claim 11 would be allowable if rewritten to overcome the rejections under 35 U.S.C. 101 and 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action, and to include all of the limitations of the base claim and any intervening claims.

Claim 12 (and consequently claim 13, if rewritten to overcome the rejections under 35 U.S.C. 101 and 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action) is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is an examiner's statement indicating allowable subject matter.
Neither Yi nor Stout, the closest prior art, alone or in combination, sufficiently discloses the claimed invention in such a way that the combination of limitations would be rendered anticipatory or obvious to one of ordinary skill in the art. Stout teaches determination of 3D coordinates and positional data of objects within an environment to be scanned. This data includes the distance, shape, and dimensions of the objects; thus, a position of an object can be determined. However, Stout does not reasonably teach or suggest imposing a weight on position contributions based on a distance of a certain value, as claimed. Stout further fails to reasonably teach or suggest combining widths of the object in order to determine subsequent widths based on a given distance, as claimed. Yi teaches determining object characteristics, and specifically, determining lane parameters and attributes based on GPS information. However, Yi also fails to reasonably teach or suggest imposing a weight on position contributions based on a distance of a certain value, as claimed. Yi further fails to reasonably teach or suggest combining widths of the object in order to determine subsequent widths based on a given distance, as claimed. Thus, the closest prior art, whether taken alone or in combination, cannot be construed as reasonably teaching or suggesting all of the elements of the claimed invention as arranged, disposed, or provided in the manner claimed by the Applicant.

This statement is not intended to necessarily state all the reasons for allowance or all the details of why the claims are allowed, and has not been written to specifically or impliedly state that all the reasons for allowance are set forth (MPEP 1302.14).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samantha K. Nickerson, whose telephone number is (571) 270-1037. The examiner can normally be reached Monday-Tuesday, 7:00 AM-3:00 PM CT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Isam Alsomiri, can be reached at (571) 272-6970. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

SAMANTHA K. NICKERSON
Primary Examiner
Art Unit 3645

/SAMANTHA K NICKERSON/
Primary Examiner, Art Unit 3645

Prosecution Timeline

Aug 02, 2021
Application Filed
Dec 30, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12523770
COMPACT LIDAR SYSTEM
2y 5m to grant Granted Jan 13, 2026
Patent 12511874
SMART NAVIGATION METHOD AND SYSTEM BASED ON TOPOLOGICAL MAP
2y 5m to grant Granted Dec 30, 2025
Patent 12493106
METHOD FOR IMPLEMENTING A LIGHT DETECTION AND RANGING LIDAR DEVICE IN A MOTOR VEHICLE
2y 5m to grant Granted Dec 09, 2025
Patent 12487363
LiDAR SYSTEMS AND METHODS DETERMINING DISTANCE TO OBJECT FROM LiDAR SYSTEM
2y 5m to grant Granted Dec 02, 2025
Patent 12487357
APPARATUS AND METHOD FOR PSEUDO THERMAL LIGHT SOURCE GHOST IMAGING AND RANGE SENSING USING NARROW-BAND SPONTANEOUS EMISSION
2y 5m to grant Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
86%
Grant Probability
99%
With Interview (+15.4%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 597 resolved cases by this examiner. Grant probability derived from career allow rate.
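The relationship between the three headline figures (86% baseline, +15.4% interview lift, 99% with interview) is consistent with the lift being applied as a relative (multiplicative) uplift rather than a percentage-point addition, since 86 + 15.4 would exceed 100. The sketch below is a hypothetical reconstruction of that arithmetic; the multiplicative interpretation is an assumption, not something the dashboard states.

```python
# Hypothetical reconstruction of the interview-adjusted grant probability.
# Assumption: the +15.4% "interview lift" is a relative uplift applied to
# the baseline career allow rate, which reproduces the displayed 99%.
baseline = 0.86   # examiner's career allow rate (displayed)
lift = 0.154      # relative interview lift (displayed)

with_interview = min(baseline * (1 + lift), 1.0)  # capped at 100%
print(round(with_interview * 100))  # -> 99
```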
