DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant's election with traverse of Invention IV (claims 11-13) in the reply filed on 02/11/2026 is acknowledged. The traversal is not persuasive. Although Applicant points to similar technology, the groups as set forth are directed to independent and distinct technical features, such that the required search is diverse for each group. Invention IV, although it recites aspect ratios, requires supervised training / classifier training for organizing or classifying keypoint subgroups, including generating reference embedding clusters during training and classifying additional subgroup(s) as road boundary and/or incidental marking. Invention III, although it also recites aspect ratios, requires aspect-ratio-based clustering and geometric normalization operations for the subgroup(s), such as clustering by predetermined aspect ratios, cropping via minimum shape dimensions plus margins, rotating to upright orientation, and resizing to a fixed size. Invention II requires ROI generation / cropping of sensed information units per subgroup and generating embeddings based on the cropped sensed information units (i.e., an ROI / cropping pipeline distinct from geometric normalization and distinct from classifier-training subject matter). Invention I requires signature-based classification subject matter, including generating and using one or more higher-dimensional signatures derived from embeddings for classification / matching. These are differences in underlying concepts, structures, and required prior art searches. Thereby, the search for the generic claims would not encompass the specific subject matter of the dependent claims of each specific group, and a search directed to one group would not necessarily be expected to disclose the most relevant prior art for the other groups, because each of these groups requires its own specific, different, and distinct search strategies and search queries.
The examination of all of the claims would indeed place an undue burden on the Examiner, and for at least these reasons, the restriction requirement is still deemed proper and is therefore made FINAL.
Accordingly, examination will proceed on the elected invention only. Claims 1-20 are pending in the application; non-elected claims 2-4, 6-10, and 15-19 are withdrawn from further consideration. Thus, claims 1, 5, 11-14, and 20 are examined as detailed below.
Claim Objections
Claims 11–14 and 20 are objected to because of the following informalities: claims 11–14 and 20 contain the wording “keypointsof keypoints” and/or “keypoints of keypoints”, which appears to be a typographical error. Applicant is required to amend the claims to recite the intended phrase, e.g., “keypoints” or other intended wording, with appropriate spacing and clarity, to place the claims in proper form. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, 11–14, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Gao (Gao et al., US 2021/0150350 A1, 2021) in view of Kheyrollahi (Kheyrollahi et al., "Automatic real-time road marking recognition using a feature driven approach", 2010).
Regarding claim 1, with deficiencies of Gao noted in square brackets [], Gao teaches a method for travel lane element classification, comprising:
obtaining, via a processing circuit, information indicative of a travel lane including one or more travel lane elements located within an environment of a vehicle (Gao, [Fig. 3, Step 302] & [0030], teaching a system receiving / obtaining data including map features of a map of the environment surrounding the vehicle; wherein map features, in [0031], are characterized by scene data, the environment includes road lane boundaries.);
generating a plurality of keypoints from the information (Gao: in [Fig. 3, Steps 304–306] & [0056–0057], teaches that a lane boundary contains multiple control points / key points that build a spline and that geographic entities can be approximated as polylines defined by one or more control points; in [0058], explains that these polylines are sets of vectors; in [0072], teaches how to generate these vectors that connect a plurality of keypoints along the map feature, including uniformly sampling keypoints along the splines.);
organizing the plurality of keypoints into one or more subgroups of keypoints, wherein each of the one or more subgroup of keypoints is indicative of one or more categories of travel lane elements (Gao: in [Fig. 3, Steps 304–306], [0031], & [0071–0072], organizes keypoints into "a respective polyline of each of the features of the map that represents the feature as a sequence of one or more vector". The system can then sequentially connect the neighboring key points along the map feature into vectors, thereby organizing the keypoints into polyline subgroups, the polyline subgroup(s) being indicative of road lane boundary, crosswalk, stoplight, and road sign categories, respectively);
generating one or more embeddings of the one or more subgroup of keypoints (Gao: in [Fig. 3, Step 308] & [0078-0079], teaches processing the respective polylines (including polylines of map features such as lane boundaries) using an encoder neural network to generate polyline features [embeddings]); and
predicting [classifying], based on the one or more embeddings, the one or more organized subgroup of keypoints as indicative of a travel lane marker (Gao, in [0061], teaches generating a trajectory prediction for a given one of the agents in the environment by processing the polyline features of the polyline that represents the trajectory of the agent using a trajectory decoder neural network; in [0074], teaches that each vector can include an identifier of road feature type, including lane boundary, such that the polyline subgroup is associated with a travel-lane-element category (lane boundary / lane marker); in [0080], the system generates a predicted trajectory for the agent from the polyline features, relating polylines to map feature polylines (lane boundaries = travel lane markers) through "self-attention mechanism model interactions between polyline embeddings"; see also [0005], which explains that object classification is one such prediction task.);
wherein the predicting [classifying] triggers a determination of a driving related operation to be executed by the vehicle (Gao, in [0040–0042], [0061], teaches an on-board planning system that makes autonomous / semi-autonomous driving decisions by generating a planned vehicle path, and uses trajectory prediction model outputs to generate planning decisions and cause the vehicle to follow the planned path (e.g., by autonomously controlling steering)).
Gao, however, fails to explicitly disclose classifying as recited, whereas Kheyrollahi discloses:
classifying, based on the one or more embeddings, the one or more organized subgroup of keypoints as indicative of a travel lane marker (Kheyrollahi, in [Page 4, Sec. 4 (Road-marking extraction)], teaches extracting a road-marking candidate as an organized set of points / pixels (i.e., an object / connected marking region), generating a feature-vector descriptor (i.e., an embedding / vector representation) for the extracted marking; then providing the feature-vector descriptor to a trained classifier to classify the marking into a road-marking category [Page 6-7, Sec. 5 (Road marking recognition)]; Because travel lane markers are road markings, Kheyrollahi’s classification of a road-marking candidate into a marking category corresponds to classifying the organized subgroup of points / keypoints as indicative of a travel lane marker when the classifier output indicates a lane-marking category.);
wherein the classifying triggers a determination of a driving related operation to be executed by the vehicle (Kheyrollahi, in [Page 1, (Abstract)], teaches that automatic road marking recognition “lends support to both autonomous driving and augmented driver assistance such as situationally aware navigation systems”; in [Page 1, Sec. 1 (Introduction), Col. 2, paragraph 4], further teaches that the results of the road-marking classification are post-processed “for either driver display or potential use by an autonomous driving decision engine”, i.e., the classification output is used by a vehicle decision engine to determine an appropriate driving-related operation.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gao’s trajectory-prediction / map-element pipeline to expressly include an embedding/vector-based classification stage for the organized point subgroup (polyline) as indicative of a travel lane marker, in view of Kheyrollahi, because (i) Gao already generates polyline features (vector embeddings) for organized polylines and processes those embeddings with neural-network prediction components for downstream autonomous-driving tasks, and Gao expressly recognizes that prediction tasks can include classification (e.g., Gao [0005] and Gao’s typed map-feature polylines); (ii) Kheyrollahi evidences that, in the art, organized point/pixel groups representing road markings are encoded into a feature-vector descriptor (including aspect ratio) and then an explicit trained classifier outputs a semantic marking label based on that vector; and (iii) adding an explicit classifier stage to Gao’s lane-related polyline embeddings (or using Gao’s polyline features as the feature-vector input to such a classifier) is a predictable use of prior-art elements according to their established functions, improving interpretability / robustness (true lane marker vs. other markings) for downstream planning/control without changing the underlying technical operation of Gao’s polyline-embedding pipeline.
Regarding claim 5, Gao [as modified by Kheyrollahi] teaches the method of claim 1, wherein the obtaining step comprises obtaining, by an imaging sensor, a field of view image of a viewable area including the vehicle environment (Gao, in [0026] & [0028], discloses camera systems and sensor subsystems captures the surrounding vehicle environment, corresponding to obtain a field of view image of a viewable area including the vehicle environment.).
Regarding claim 11, Gao [as modified by Kheyrollahi] teaches the method of claim 1, wherein the organizing step comprises training a classifier to classify each of the one or more subgroup of keypoints based on the aspect ratios (Gao, in [0071–0075], teaches organizing keypoints/vectors into polyline subgroups for map features. Kheyrollahi, in [Page 6-7, Sec. 5 (Road marking recognition)], teaches training a classifier that classifies marking objects based on a feature-vector descriptor that is primarily constructed from the aspect ratio (height/width). Accordingly, it would have been obvious to train a classifier for Gao’s polyline subgroups using an aspect ratio derived from the subgroup’s keypoint coordinates, since aspect ratio is a known, routinely used geometric feature in trained road-marking classifiers as taught by Kheyrollahi.).
Regarding claim 12, Gao [as modified by Kheyrollahi] teaches the method of claim 11, further comprising classifying, based on the organizing step, at least a second subgroup of keypoints as indicative of a road boundary.
(Gao, in [Fig. 3, Steps 304–306], [0031], [0071–0072], & [0083], teaches forming the organized subgroup (polyline) for each map feature from keypoints/vectors, each vector including an identifier of road feature type, including lane boundary, such that polyline subgroup(s) are indicative of a road boundary. Kheyrollahi teaches performing classification of an organized set of points/pixels representing a road marking [Page 6-7, Sec. 5 (Road marking recognition)].
Accordingly, when Gao’s organized polyline subgroup (road-boundary / lane-boundary marking) is provided as the road-marking candidate to Kheyrollahi’s trained classifier (e.g., using a feature-vector descriptor derived from the subgroup’s geometry), the classifier outputs the road boundary / lane boundary category for that second subgroup, thereby classifying the second subgroup as indicative of a road boundary. It would have been obvious to use Kheyrollahi’s trained marking-classification stage to explicitly classify Gao’s organized lane-boundary polyline subgroup as “road boundary” because Gao already represents lane boundaries as distinct map-feature polylines and Kheyrollahi teaches that marking candidates are routinely encoded into a feature vector and classified by a trained classifier, yielding predictable results and improving downstream interpretation for vehicle planning/control.)
Regarding claim 13, Gao [as modified by Kheyrollahi] teaches the method of claim 12, further comprising classifying, based on the organizing step, at least a third subgroup of keypoints as indicative of an incidental marking (Gao: in [0031], [0056], & [0069], teaches forming additional organized polyline subgroups (from keypoints/vectors) for non-lane road features besides lane boundaries (including crosswalks, stoplights, stop signs, vehicles, pedestrians, cyclists, and other surrounding agents) via the “road feature type” identifier associated with each polyline / map feature, such that polyline subgroup(s) are indicative of an incidental marking. Kheyrollahi, in [Page 6-7, Sec. 5 (Road marking recognition)], also teaches classification of organized sets of extracted road-marking objects (non-lane / other marking candidates), i.e., incidental markings, as distinct from lane markers.
Accordingly, when Gao’s organized polyline subgroup for a non-lane feature is provided as the road-marking candidate to Kheyrollahi’s trained classifier (e.g., using a feature-vector descriptor derived from the subgroup’s geometry), the classifier outputs a non-lane / other marking category for that third subgroup, thereby classifying the third subgroup as indicative of an incidental marking. It would have been obvious to use Kheyrollahi’s trained marking-classification stage to explicitly classify Gao’s organized non-lane polyline subgroup(s) as “incidental marking” because Gao already represents multiple non-lane markings / features as distinct map-feature polylines and Kheyrollahi teaches that marking candidates are routinely encoded into a feature vector and classified by a trained classifier, yielding predictable results and improving downstream interpretation for vehicle planning/control.).
Regarding claims 14 and 20, the rationale provided for claim 1 is incorporated herein. The method of claim 1 corresponds to the non-transitory computer-readable medium of claim 14 and to the system of claim 20, each of which performs the steps disclosed herein. Therefore, claims 14 and 20 are rejected under 35 U.S.C. § 103 for at least the reasons set forth for claim 1.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEN KUDO whose telephone number is (571)272-4498. The examiner can normally be reached M-F 8am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KEN KUDO/Examiner, Art Unit 2671
/VINCENT RUDOLPH/Supervisory Patent Examiner, Art Unit 2671