Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is responsive to Applicant’s amendment and request for reconsideration of application 18/754,904 filed on February 10, 2026.
Claims 1, 12, and 20 are amended.
Applicant’s request for reconsideration of the 35 U.S.C. 112(b) rejection set forth in the last Office action is persuasive; therefore, that 112(b) rejection is withdrawn.
Claims 2-11 and 13-19 were previously presented.
Claims 1-20 are pending.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, and 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over Golov (US 20210309261 A1) in view of Peng (US 10737717 B2).
Regarding claim 1, Golov discloses a processor-implemented method (FIG. 4, “processor 133”) with path generation (¶0045, “The current vehicle creates navigation path through the objects based on this map”), the method comprising:
obtaining input data that includes recognition sensor data (¶0112, “The one or more sensors 137 may include a visible light camera, an infrared camera, a LIDAR, RADAR, or sonar system, and/or peripheral sensors”) and state data (¶0068, “one or more sensors 220 configured to collect information regarding operational aspects of autonomous vehicle 200, such as speed, vehicle speed, vehicle acceleration, braking force, braking deceleration, and the like”);
inputting the input data into an artificial neural network (ANN) model and outputting output data corresponding to the input data (¶0112, “applies the sensor input to an ANN defined by the model 119 to generate an output that identifies or classifies an event or object captured in the sensor input, … Data from this identification and/or classification can be included in object data used to generated a navigation path for vehicle 113.”) in a single forward process (¶0096, “The vehicle 113 is controlled based on at least one output from the ANN model”); and
obtaining path data and control data corresponding to the path data, based on the output data (¶0100, “The sensors of the vehicles 111, . . . , 113 generate sensor inputs for the ANN model 119 in autonomous driving and/or advanced driver assistance system to generate operating instructions, such as steering, braking, accelerating, driving, alerts, emergency response”).
Golov does not explicitly disclose, but Peng teaches, outputting output data corresponding to the input data in a single forward pass of the ANN model (FIG. 5, step 506, “generating a steering angle command using a feedforward artificial neural network (ANN) as a function of the constructed desired waypoint data and vehicle speed”; col. 7, lines 50-52, “a single output node”); and
obtaining path data comprising waypoints, based on coordinates of elements included in the output data output in the single forward pass of the ANN model (col. 7, lines 50-54, “one node each for receiving the vehicle speed and the relative lateral position and heading with respect to vehicle centered coordinate system at each waypoint”); and
obtaining control data corresponding to the path data by performing one or more transformation operations on the output data output in the single forward pass of the ANN model (FIG. 5, step 506, “generating a steering angle command using a feedforward artificial neural network (ANN) as a function of the constructed desired waypoint data and vehicle speed”; col. 7, lines 50-52, “a single output node”).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the autonomous vehicle disclosed in Golov with the single-forward-pass output taught in Peng, with a reasonable expectation of success, because doing so would provide continuously performed safety checks that can override and/or modify the user controls (e.g., the vehicle can be stopped if an object is detected).
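For illustration only, and not as a characterization of either reference: the claim element at issue, producing waypoint path data and a control command from one forward pass of a feedforward network, can be sketched as follows. The layer sizes, variable names, and output layout are assumptions chosen for the sketch, not details drawn from Peng or Golov.

```python
import numpy as np

# Illustrative sketch only: a tiny feedforward network that, in a single
# forward pass, maps sensor/state input to waypoint coordinates plus a
# steering command. All dimensions below are assumed for illustration.

rng = np.random.default_rng(0)

def forward_pass(x, w1, w2):
    """One forward pass: input -> hidden -> [waypoint coords..., steering]."""
    h = np.tanh(x @ w1)   # hidden-layer activations
    return h @ w2         # linear output layer

# Assumed shapes: 6 input features (e.g., speed plus lateral offsets),
# 8 hidden units, 5 outputs: two (x, y) waypoints plus one steering angle.
w1 = rng.normal(size=(6, 8))
w2 = rng.normal(size=(8, 5))

x = rng.normal(size=(6,))      # one input vector of sensor/state data
out = forward_pass(x, w1, w2)  # single forward pass

waypoints = out[:4].reshape(2, 2)  # path data: two (x, y) waypoints
steering = out[4]                  # control data from the same pass
```

The point of the sketch is only that both the path data (waypoints) and the control data (steering) fall out of the same single forward pass, with the control value obtained by an operation (here, slicing and reshaping) on the network output.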
Regarding claims 3 and 14, Golov discloses wherein the obtaining of the control data comprises: obtaining steering data corresponding to the path data; and obtaining acceleration data corresponding to the path data (¶0068).
Regarding claim 10, Golov teaches wherein the obtaining of the input data comprises: obtaining the recognition sensor data including either one or both of image data and light detection and ranging (LiDAR) data; and obtaining the state data including any one or any combination of any two or more of speed data, direction data, and acceleration information of an autonomous driving device (¶0068-0069, ¶0071).
Regarding claims 11, 12, and 13, these claims are rejected using the same art and rationale used to reject claim 1.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Golov (US 20210309261 A1) in view of Peng (US 10737717 B2), as applied to claim 1 above, and further in view of Rosales (US 20210309261 A1).
Regarding claim 2, Golov does not explicitly disclose, but Rosales teaches, wherein the outputting of the output data comprises inputting the input data into the ANN model and outputting the output data including a plurality of output elements respectively corresponding to a plurality of prediction timestamps (FIG. 7).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the autonomous vehicle disclosed in Golov with the timestamp predictions taught in Rosales, with a reasonable expectation of success, because doing so would reduce or eliminate driving error related to such maneuvers.
Claims 4-7 and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Golov (US 20210309261 A1) in view of Peng (US 10737717 B2), as applied to claim 1 above, and further in view of Black (US 20250121849 A1).
Regarding claims 4 and 15, Golov does not explicitly disclose, but Black teaches, wherein the outputting of the output data comprises inputting the input data into the ANN model and outputting quaternion data corresponding to the input data (¶0057).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the autonomous vehicle disclosed in Golov with the quaternion data taught in Black, with a reasonable expectation of success, because doing so would reduce the risk associated with future trajectories.
Regarding claims 5 and 16, Black further teaches wherein the outputting of the output data comprises inputting the input data into the ANN model and outputting dual quaternion data corresponding to the input data (¶0057).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the autonomous vehicle disclosed in Golov with the quaternion data taught in Black, with a reasonable expectation of success, because doing so would reduce the risk associated with future trajectories.
Regarding claims 6 and 17, Black further teaches wherein the obtaining of the path data and the control data corresponding to the path data comprises: obtaining the path data based on coordinates of dual quaternion elements included in the dual quaternion data; obtaining steering data corresponding to the path data based on a rotation transformation operation between the dual quaternion elements; and obtaining acceleration data corresponding to the path data based on a translation transformation operation between the dual quaternion elements (¶0057).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the autonomous vehicle disclosed in Golov with the quaternion data taught in Black, with a reasonable expectation of success, because doing so would reduce the risk associated with future trajectories.
Regarding claims 7 and 18, Black further teaches wherein the obtaining of the path data based on the coordinates of the dual quaternion elements comprises obtaining path data between the coordinates of the dual quaternion elements through an interpolation operation (¶0057).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the autonomous vehicle disclosed in Golov with the quaternion data taught in Black, with a reasonable expectation of success, because doing so would reduce the risk associated with future trajectories.
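Purely as an illustrative sketch, and not as a characterization of Black or of the claimed invention: a unit dual quaternion packages a rotation and a translation into a single element, so waypoint poses expressed as dual quaternions support recovering translation (position coordinates), applying rotation (heading), and interpolating between consecutive elements, the operations recited in claims 6/17 and 7/18. All function names and the yaw-only rotation below are assumptions chosen for the sketch.

```python
import numpy as np

# Illustrative sketch only: dual quaternion poses, translation recovery,
# and interpolation between two waypoint elements. Not drawn from Black.

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    """Quaternion conjugate (negate the vector part)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def dq_from_pose(yaw, t):
    """Dual quaternion (real, dual) from a yaw rotation and translation t."""
    real = np.array([np.cos(yaw / 2), 0.0, 0.0, np.sin(yaw / 2)])
    dual = 0.5 * qmul(np.array([0.0, *t]), real)  # dual part encodes t
    return real, dual

def dq_translation(real, dual):
    """Recover the translation: t = 2 * dual * conj(real)."""
    return 2.0 * qmul(dual, qconj(real))[1:]

def dq_lerp(dq0, dq1, s):
    """Normalized linear blend between two unit dual quaternions."""
    real = (1 - s) * dq0[0] + s * dq1[0]
    dual = (1 - s) * dq0[1] + s * dq1[1]
    n = np.linalg.norm(real)
    return real / n, dual / n

# Two assumed waypoint poses and an interpolated pose halfway between them.
dq0 = dq_from_pose(0.0, [0.0, 0.0, 0.0])
dq1 = dq_from_pose(0.0, [2.0, 0.0, 0.0])
mid = dq_lerp(dq0, dq1, 0.5)
midpoint = dq_translation(*mid)  # coordinates of the interpolated waypoint
```

The normalized linear blend shown is the simplest interpolation between dual quaternion elements; a spherical (screw) interpolation would typically be preferred when the rotation between waypoints is large.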
Claims 8, 9, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Golov (US 20210309261 A1) in view of Peng (US 10737717 B2), as applied to claim 1 above, and further in view of Choi (US 20210286371 A1).
Regarding claims 8 and 19, Golov does not explicitly disclose, but Choi teaches, inputting the input data into an encoder and obtaining feature data corresponding to the input data, wherein the outputting of the output data comprises inputting the feature data into the ANN model and outputting the output data corresponding to the feature data (¶0036).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the autonomous vehicle disclosed in Golov with the encoder input taught in Choi, with a reasonable expectation of success, because doing so would improve the accuracy of the predicted classification for the particular objects.
Regarding claim 9, Choi further teaches wherein the obtaining of the feature data comprises: inputting the recognition sensor data into a first encoder and obtaining first feature data corresponding to the recognition sensor data; and inputting the state data into a second encoder and obtaining second feature data corresponding to the state data (¶0036).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the autonomous vehicle disclosed in Golov with the encoder input taught in Choi, with a reasonable expectation of success, because doing so would improve the accuracy of the predicted classification for the particular objects.
Regarding claim 20, the claim recites a processor-implemented method with path generation, the method comprising:
Golov discloses obtaining input data that includes recognition sensor data (¶0112, “The one or more sensors 137 may include a visible light camera, an infrared camera, a LIDAR, RADAR, or sonar system, and/or peripheral sensors”) and state data (¶0068, “one or more sensors 220 configured to collect information regarding operational aspects of autonomous vehicle 200, such as speed, vehicle speed, vehicle acceleration, braking force, braking deceleration, and the like”);
generating, by inputting the input data into an artificial neural network (ANN) model in a single forward process (¶0112, “applies the sensor input to an ANN defined by the model 119 to generate an output that identifies or classifies an event or object captured in the sensor input, … Data from this identification and/or classification can be included in object data used to generated a navigation path for vehicle 113.”; ¶0096, “The vehicle 113 is controlled based on at least one output from the ANN model”), path data, steering data, and acceleration data (¶0068),
Golov does not explicitly disclose, but Black teaches, a plurality of dual quaternion elements each corresponding to a respective timestamp (¶0057, ¶0035); and
obtaining the path data, the steering data, and the acceleration data by performing respective operations between the dual quaternion elements (¶0057).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the autonomous vehicle disclosed in Golov with the quaternion data taught in Black, with a reasonable expectation of success, because doing so would reduce the risk associated with future trajectories.
Golov does not explicitly disclose, but Peng teaches, outputting output data corresponding to the input data in a single forward pass of the ANN model (FIG. 5, step 506, “generating a steering angle command using a feedforward artificial neural network (ANN) as a function of the constructed desired waypoint data and vehicle speed”; col. 7, lines 50-52, “a single output node”); and
obtaining path data comprising waypoints, based on coordinates of elements included in the output data output in the single forward pass of the ANN model (col. 7, lines 50-54, “one node each for receiving the vehicle speed and the relative lateral position and heading with respect to vehicle centered coordinate system at each waypoint”); and
obtaining control data corresponding to the path data by performing one or more transformation operations on the output data output in the single forward pass of the ANN model (FIG. 5, step 506, “generating a steering angle command using a feedforward artificial neural network (ANN) as a function of the constructed desired waypoint data and vehicle speed”; col. 7, lines 50-52, “a single output node”).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the autonomous vehicle disclosed in Golov with the single-forward-pass output taught in Peng, with a reasonable expectation of success, because doing so would provide continuously performed safety checks that can override and/or modify the user controls (e.g., the vehicle can be stopped if an object is detected).
Response to Arguments
Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Yangel (US 20220388532 A1) discloses a method of operating a self-driving vehicle. The method includes predicting a trajectory of an agent in a vicinity of the self-driving vehicle. This is done by: receiving sensor data indicative of a current situation in the vicinity of the self-driving vehicle; generating a current situation feature vector based, at least in part, on the sensor data; searching a pre-built bank of library feature vectors to identify a result feature vector from the pre-built bank that is most relevant to the current situation feature vector, wherein each library feature vector in the pre-built bank is associated with an observed trajectory; and predicting the trajectory of the agent based, at least in part, on the result feature vector. The method also includes planning an action of the vehicle based, at least in part, on the predicted agent trajectory, and operating the vehicle according to the planned action (abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REDHWAN K MAWARI whose telephone number is (571) 270-1535. The examiner can normally be reached Mon-Fri 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rachid Bendidi can be reached at 571-272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/REDHWAN K MAWARI/ Primary Examiner, Art Unit 3667