DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-20 of U.S. application 18/825,869 were filed on 9/5/24.
On 11/18/24, applicant filed a preliminary amendment in which claims 1-20 were cancelled and claims 21-40 were newly added. Claims 21-40 are presently pending and presented for examination.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
“trajectory timer” configured to generate temporal characteristics in claims 21 and 28, and to output various kinematic values in claims 24-25; and
“trajectory shaper” configured to generate spatial characteristics in claims 21 and 29-30
Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. A review of the specification reveals that, in at least paragraphs [0020] and [0026], the relevant operations are all performable by processors.
These processors constitute adequate structure for performing the claimed functions, so no rejections under 35 U.S.C. 112 are made and no further action is required with respect to this 112(f) interpretation.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 31-39 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because “One or more computer-readable media” could be interpreted to be a signal per se, which is not eligible subject matter under 35 U.S.C. 101.
Examiner’s note: applicant can overcome the above 101 rejections by making the following changes:
In claim 31, “One or more computer-readable media” should be “One or more non-transitory computer-readable media”
In claims 32-39, “The one or more computer-readable media” should be “The one or more non-transitory computer-readable media”
The addition of the word “non-transitory” in these particular places as suggested above would prevent the claims from being directed to a signal per se and thus would overcome the 101 rejections.
Allowable Subject Matter
Claims 21-30 and 40 are allowable over the prior art of record. Claims 31-39 also contain allowable subject matter, and would be allowable if rewritten to overcome the 101 rejections set forth in the previous section of this Office action.
The closest prior art of record is Onofrio et al. (US 20200249684 A1) in view of Chen et al. (US 20210179092 A1) in further view of Frossard et al. (US 20190147610 A1) in further view of Reschka et al. (US 11577741 B1), hereinafter referred to as Onofrio, Chen, Frossard, and Reschka, respectively. The following is a statement of reasons for the indication of allowable subject matter:
Regarding claims 21, 31, and 40, Onofrio discloses An autonomous vehicle control system for controlling an autonomous vehicle (See at least Figs. 7A and 7C in Onofrio: Onofrio discloses autonomous vehicle 700 [See at least Onofrio, 0022]), the autonomous vehicle control system comprising:
one or more processors (See at least Figs. 7A and 7C in Onofrio: Onofrio discloses that The controller(s) 736 may include one or more onboard (e.g., integrated) computing devices (e.g., supercomputers) that process sensor signals, and output operation commands (e.g., signals representing commands) to enable autonomous driving [See at least Onofrio, 0075]); and
one or more computer-readable media storing instructions that when executed by the one or more processors cause the autonomous vehicle control system to perform operations (See at least Figs. 7A and 7C in Onofrio: Onofrio discloses that The controller(s) 736 may include one or more onboard (e.g., integrated) computing devices (e.g., supercomputers) that process sensor signals, and output operation commands (e.g., signals representing commands) to enable autonomous driving [See at least Onofrio, 0075]), the operations comprising:
obtaining data indicative of an object within an environment of the autonomous vehicle (See at least Fig. 6 in Onofrio: Onofrio discloses The method 600, at block B602, includes processing, via a plurality of perception sources, sensor data to generate first data representative of a plurality of perception outputs [See at least Onofrio, 0065]. Also see at least Fig. 5 in Onofrio: Onofrio further discloses that, For example, the perception sources—e.g., the DNN(s) 104, the HD map(s) 106, and/or the object trace(s) 108—may process the sensor data 102 to generate the perception outputs [See at least Onofrio, 0065]);
generating, based on a lane graph associated with the environment, a motion goal path for the object (Onofrio discloses that The object trace(s) 108 may leverage motion of the vehicle 700—e.g., via one or more of IMU sensors 766 and/or GNSS sensors 758—and/or image data generated by one or more cameras, RADAR data generated by one or more RADAR sensors 760, LIDAR data generated by one or more LIDAR sensors 764, and/or other sensor types to track and compute a trajectory of one or more other objects (e.g., vehicles) along the driving surface of the vehicle 700 [See at least Onofrio, 0030]).
Chen teaches generating, by a trajectory timer and based on the motion goal path, temporal characteristics of a motion trajectory of the object for the motion goal path, wherein the temporal characteristics indicate an acceleration profile of the object (See at least Fig. 1 in Chen: Chen teaches that the other-vehicle trajectory estimation module 10 calculates the deviation amount of the host vehicle with respect to a lane center according to a plurality of environment-sensing data and estimates the other-vehicle trajectory of at least one other vehicle that neighbors the host vehicle [See at least Chen, 0025]. Chen further teaches that The trajectory is defined as a combinatorial function composed of a position and a speed at every estimated unit time point [See at least Chen, 0025]. Chen further teaches that Thus, the other-vehicle trajectory includes a future path and a future speed [See at least Chen, 0025]).
However, none of the prior art of record, taken either alone or in combination, teaches or suggests generating, by a trajectory shaper and based on the motion goal path and the temporal characteristics, spatial characteristics of the motion trajectory of the object;
generating a motion plan of the autonomous vehicle based on the motion trajectory of the object, wherein the motion plan is indicative of a motion trajectory of the autonomous vehicle.
The above missing limitations appear deceptively simple, but are actually more complicated than they look. For a reference to read on the missing limitations, it would have to teach that, for a single object, multiple temporal characteristics (pertaining to velocity or acceleration values) are calculated based on a supposed future path of the object, and a predicted path of the object is then determined based on those values. At face value, this seems nonsensical, since there is no point in using the acceleration or velocity values derived from a path to calculate that same path. The only context in which this makes sense is machine learning. Paragraph [0013] of the specification explains that the purpose of these claim steps is to use machine learning on approximations of a future path of the object to derive multiple acceleration values, which are then combined with the approximations of the future path of the object to derive more accurate trajectories of the object. This particular machine learning method is not contemplated in the prior art.
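To make the claimed two-stage arrangement concrete, the following is a minimal illustrative sketch in Python. It is not drawn from the instant specification or from any cited reference; the goal path, all identifiers, and the fixed kinematic rule standing in for the learned model are hypothetical.

# Illustrative sketch only; not the applicant's disclosed implementation.
# Stage 1 ("trajectory timer"): derive an acceleration profile from an
# approximate goal path. Stage 2 ("trajectory shaper"): integrate that
# profile back along the goal path to place refined trajectory points.
import math

def path_length(path):
    # total polyline length of the goal path
    return sum(math.hypot(bx - ax, by - ay)
               for (ax, ay), (bx, by) in zip(path, path[1:]))

def trajectory_timer(goal_path, v0=5.0, dt=1.0, steps=4):
    # A learned model would go here; as a stand-in, pick the constant
    # acceleration that covers the goal path in steps * dt seconds.
    target_speed = path_length(goal_path) / (steps * dt)
    return [(target_speed - v0) / (steps * dt)] * steps

def point_at_arc_length(path, s):
    # interpolate the point s meters along the polyline
    for (ax, ay), (bx, by) in zip(path, path[1:]):
        seg = math.hypot(bx - ax, by - ay)
        if s <= seg:
            t = s / seg
            return (ax + t * (bx - ax), ay + t * (by - ay))
        s -= seg
    return path[-1]

def trajectory_shaper(goal_path, accel_profile, v0=5.0, dt=1.0):
    # integrate accelerations to arc lengths, then place refined points
    s, v, points = 0.0, v0, []
    for a in accel_profile:
        s += v * dt + 0.5 * a * dt * dt
        v += a * dt
        points.append(point_at_arc_length(goal_path, s))
    return points

goal_path = [(0.0, 0.0), (10.0, 0.0), (20.0, 2.0)]  # hypothetical lane-graph path
refined = trajectory_shaper(goal_path, trajectory_timer(goal_path))
print(refined)

The point of the sketch is the data flow: the approximate path feeds the temporal stage, and the temporal stage's output feeds back into the spatial placement of the refined trajectory along that same path.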
Chen does not read on the missing limitations, because while Chen does teach calculating an object acceleration profile with respect to various positions in a predicted future trajectory of the object (See at least [Chen, 0025]), Chen does not teach or suggest recalculating or refining any part of the predicted future trajectory of the object based on that acceleration profile.
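For illustration, the kind of acceleration profile Chen teaches can be read directly off a predicted trajectory's per-unit-time speed samples by finite differences, with nothing fed back into the path. This sketch uses hypothetical samples and is not Chen's actual computation.

# Illustrative sketch only; hypothetical samples, not Chen's algorithm.
# Chen's trajectory is a (position, speed) pair at every estimated unit
# time point; an acceleration profile follows by finite differences,
# and nothing here feeds back into the predicted path itself.
DT = 1.0  # unit time step, seconds
trajectory = [(0.0, 5.0), (5.5, 6.0), (12.0, 7.0), (19.5, 8.0)]  # (pos m, speed m/s)
speeds = [v for (_p, v) in trajectory]
accel_profile = [(speeds[i + 1] - speeds[i]) / DT for i in range(len(speeds) - 1)]
print(accel_profile)  # [1.0, 1.0, 1.0] m/s^2 for these samples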
Frossard comes somewhat close to teaching the missing limitations: Frossard teaches that sensor data is processed by a machine learning model to generate object detections (See at least [Frossard, 0099-0100]). Frossard further teaches that a flow graph is constructed to formulate a trajectory for each object detection (See at least [Frossard, 0102]), and that one or more trajectories are generated for each object detection using a linear program (See at least [Frossard, 0103]). However, Frossard does not teach or suggest refining trajectories using acceleration or velocity values derived from the approximate path of each object. Machine learning is present, but only for initial object detection, not for processing the trajectory approximation (See at least [Frossard, 0099-0100]). Accordingly, Frossard is not in the field of endeavor of using path predictions for an object to calculate an acceleration profile that is then used to refine those path predictions.
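As a rough illustration of linking per-frame detections into object trajectories, the following sketch uses a greedy nearest-neighbor association as a stand-in for the flow-graph and linear-program formulation Frossard describes; all data and identifiers are hypothetical.

# Illustrative sketch only; a greedy nearest-neighbor stand-in for the
# flow-graph / linear-program association Frossard describes. An exact
# method would solve an assignment problem to avoid double-matching.
import math

# hypothetical detections: one list of (x, y) centroids per frame
frames = [
    [(0.0, 0.0), (50.0, 10.0)],
    [(1.0, 0.2), (51.5, 10.1)],
    [(2.1, 0.3), (53.0, 10.3)],
]

tracks = [[det] for det in frames[0]]  # one track per first-frame detection
for detections in frames[1:]:
    for track in tracks:
        last = track[-1]
        # extend each track with the closest detection in the next frame
        track.append(min(detections,
                         key=lambda d: math.hypot(d[0] - last[0], d[1] - last[1])))
print(tracks)  # two trajectories, one per tracked object

Even in this simplified form, the trajectories are built purely from detection geometry; no acceleration values derived from the paths are used to revise them.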
Reschka also comes somewhat close to teaching the missing limitations: Reschka teaches that a primary AI 116 of a vehicle determines the presence and track of an object (See at least [Reschka, Col 5, line 62-Col 6, line 17]). Reschka further teaches that a secondary AI may take over in anomalous situations, based on trajectories and detected object position or track, and can implement techniques based on the acceleration of objects around the vehicle (See at least [Reschka, Col 5, line 62-Col 6, line 17]). Reschka further teaches that the secondary AI can predict trajectories for the object (See at least [Reschka, Col 6, lines 18-43]), and that the prediction component 328 may predict one or more object trajectories for a specific object detected by the perception component 326 based on predicted acceleration (See at least [Reschka, Col 16, lines 20-46]). Reschka further teaches that all of these computations may be implemented onboard the vehicle, including the components in Fig. 3 such as components 328 and 326 (See at least [Reschka, Col 12, line 56-Col 13, line 6]).
However, the acceleration value of Reschka is not derived from an approximation of a predicted path of the object, unlike the claimed invention. Accordingly, Reschka cannot read on the claimed invention.
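To make the distinction concrete, a Reschka-style prediction runs in one direction only: from an observed state and a predicted acceleration out to future positions. In the following sketch the acceleration is an input, not something derived from an approximated path; the values are hypothetical, and constant-acceleration kinematics stand in for Reschka's actual techniques.

# Illustrative sketch only; hypothetical values. A constant-acceleration
# rollout stands in for Reschka-style prediction: the acceleration is an
# input here, not something derived from an approximated path.
DT = 1.0          # time step, seconds
x, v = 0.0, 5.0   # observed position (m) and speed (m/s)
a = 0.5           # predicted acceleration (m/s^2), not path-derived
positions = []
for _ in range(4):
    x += v * DT + 0.5 * a * DT * DT
    v += a * DT
    positions.append(round(x, 2))
print(positions)  # [5.25, 11.0, 17.25, 24.0]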
For at least the above stated reasons, claims 21, 31, and 40 contain allowable subject matter.
Regarding claims 22-30 and 32-39, these claims also contain allowable subject matter at least by virtue of their dependence from claims 21 and 31, respectively.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAEEM T ALAM whose telephone number is (571)272-5901. The examiner can normally be reached M-F, 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, FADEY JABR can be reached at (571) 272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NAEEM TASLIM ALAM/
Examiner, Art Unit 3668