DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is in response to applicant’s communication filed 24 March 2026, in response to the Office Action mailed 29 December 2025. The applicant’s remarks and any amendments to the claims or specification have been considered, with the results that follow.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 23 February 2026 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 12, 14-17, and 19-22 are rejected under 35 U.S.C. 103 as being unpatentable over Green (US 10,019,011 – cited in an IDS), in view of Johansson et al. (Vehicle Applications of Controller Area Network, 2005, pgs. 1-25), and further in view of Chae (US 2019/0360446).
As per claim 12, Green teaches a method for operating a motor vehicle [systems/methods for providing autonomous vehicle operation by leveraging a machine-learned yield model (abstract, etc.)], comprising: acquiring current operating data from a vehicle electronic control unit (ECU) during an acquisition phase to obtain archived operating data [the machine-learned yield model can be trained or otherwise configured to receive and process feature data descriptive of objects perceived by the autonomous vehicle and/or the surrounding environment (current operating data) and, in response to receipt of the feature data, provide yield decisions for the autonomous vehicle relative to the objects (col. 3, lines 13-29; etc.) where the motion planning system can provide the selected motion plan to a vehicle controller (the electronic control unit) that controls one or more vehicle controls (e.g., actuators that control gas flow, steering, braking, etc.) to execute the selected motion plan until the next motion plan is generated (col. 5, lines 14-22; etc.); and where receiving and processing the operating data is the acquisition phase]; evaluating the archived operating data during a simulation phase to obtain labeled training data for an artificial intelligence [the machine-learned yield model (an artificial intelligence) can be trained based at least in part on synthesized yield behaviors generated by playing forward or otherwise simulating certain scenarios that are described by log data, where the yield data can be real-world or simulated (col. 10, lines 16-24; etc.); where creating the synthesized data via simulation is the simulation phase]; training the artificial intelligence with the labeled training data during a training phase [the machine-learned yield model (an artificial intelligence) can be trained based at least in part on synthesized yield behaviors generated by playing forward or otherwise simulating certain scenarios that are described by log data, where the yield data can be real-world or simulated (col. 10, lines 8-43; etc.); where using the simulation data to train the model is the training phase]; and activating or deactivating the ECU or a control function of the ECU of the motor vehicle with the artificial intelligence during a prediction phase [the motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the state data provided by the perception system and/or the predicted one or more future locations for the objects (col. 4, lines 42-45; etc.) and, once the optimization planner has identified the optimal motion plan (or some other iterative break occurs), the optimal candidate motion plan can be selected and executed by the autonomous vehicle. For example, the motion planning system can provide the selected motion plan to a vehicle controller (the electronic control unit) that controls one or more vehicle controls (e.g., actuators that control gas flow, steering, braking, etc.) to execute the selected motion plan until the next motion plan is generated (col. 5, lines 14-22; etc.)].
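For illustration only, the four-phase flow mapped above (acquisition, simulation/labeling, training, prediction) can be sketched as follows. This is a hypothetical toy sketch; every name and rule in it is invented for the illustration and is not drawn from Green:

```python
# Hypothetical sketch of the claimed four-phase flow:
# acquire -> simulate/label -> train -> predict.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Archive:
    records: List[dict] = field(default_factory=list)

def acquisition_phase(signals):
    """Collect current operating data from the ECU into an archive."""
    return Archive(records=[{"speed": s} for s in signals])

def simulation_phase(archive):
    """Play the archived data forward to derive labels (here: a toy rule)."""
    return [(r, 1 if r["speed"] < 5 else 0) for r in archive.records]

def training_phase(labeled):
    """Fit a trivial threshold 'model' from the labeled examples."""
    on = [r["speed"] for r, y in labeled if y == 1]
    return max(on) if on else 0.0

def prediction_phase(model, current_speed):
    """Emit the activate(1)/deactivate(0) decision for the ECU function."""
    return 1 if current_speed <= model else 0

archive = acquisition_phase([0.0, 3.2, 12.5, 40.1])
model = training_phase(simulation_phase(archive))
decision = prediction_phase(model, 2.0)   # 1 = activate
```

The point of the sketch is only the phase separation: the archive produced in the acquisition phase is consumed offline by the simulation phase, and only the trained artifact is used online in the prediction phase.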
While Green teaches acquiring data from an ECU during an acquisition phase (see above) which can include wireless signals (see, e.g., Green: col. 23, lines 24-29; col. 24, lines 23-33; col. 27, lines 24-36; etc.), it has not been relied upon for teaching acquiring current operating data of a vehicle including controller area network (CAN) signals. Furthermore, while Green teaches using labeled training data for controlling the ECU (see, e.g., Green: col. 10, lines 8-54; etc.), it has not been relied upon for teaching wherein the labeled training data include binary control signals that activate or deactivate the ECU or a control function of the ECU; [and] activating or deactivating the ECU or a control function of the ECU of the motor vehicle with one of the binary control signals output from the artificial intelligence during a prediction phase.
Johansson teaches acquiring current operating data of a vehicle including controller area network (CAN) signals from a vehicle electronic control unit (ECU) [a controller area network (CAN) can be used in a vehicle to collect data from a large variety of network embedded control systems, including an electronic control unit (ECU) that controls the engine, turbo, fan, etc. (pg. 1, section 1; pg. 4, section 2; etc.)].
Green and Johansson are analogous art, as they are within the same field of endeavor, namely vehicle controls using collected sensor data.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to collect vehicle data from an ECU using CAN signals, as taught by Johansson, for acquiring data from the controllers and sensors in the vehicle control system taught by Green.
Johansson provides motivation as [The Controller Area Network (CAN) is a serial bus communications protocol developed by Bosch in the early 1980s. It defines a standard for efficient and reliable communication between sensor, actuator, controller, and other nodes in real-time applications. CAN is the de facto standard in a large variety of networked embedded control systems. The early CAN development was mainly supported by the vehicle industry: CAN is found in a variety of passenger cars, trucks, boats, spacecraft, and other types of vehicles. The protocol is also widely used today in industrial automation and other areas of networked embedded control, with applications in diverse products. Combining networks and mechatronic modules makes it possible to reduce both the cabling and the number of connectors, which facilitates production and increases reliability. Introducing networks in vehicles also makes it possible to more efficiently carry out diagnostics and to coordinate the operation of the separate subsystems. The CAN protocol standardizes the physical and data link layers, which are the two lowest layers of the open systems interconnect (OSI) communication model (pgs. 1-2, section 1; etc.)].
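As a concrete illustration of the kind of CAN acquisition Johansson describes, the following sketch unpacks one classic (non-FD) SocketCAN frame: a 32-bit identifier, a length byte, three pad bytes, and up to eight data bytes. The identifier and payload values are invented for the example:

```python
import struct

# Classic SocketCAN frame layout: 32-bit id, length byte, 3 pad bytes,
# then an 8-byte data field (16 bytes total).
CAN_FRAME = struct.Struct("<IB3x8s")

def parse_can_frame(raw: bytes):
    """Unpack one 16-byte SocketCAN frame into (arbitration id, payload)."""
    can_id, dlc, data = CAN_FRAME.unpack(raw)
    return can_id & 0x1FFFFFFF, data[:dlc]   # mask off flag bits

# Example frame: hypothetical identifier 0x123 carrying two data bytes.
raw = CAN_FRAME.pack(0x123, 2, bytes([0x1A, 0x2B]))
can_id, payload = parse_can_frame(raw)
```

In a real acquisition phase, frames like this would be read continuously from the bus and archived keyed by arbitration identifier.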
Chae teaches wherein the labeled training data include binary control signals that activate or deactivate the ECU or a control function of the ECU; [and] activating or deactivating the ECU or a control function of the ECU of the motor vehicle with one of the binary control signals output from the artificial intelligence during a prediction phase [an artificial intelligence apparatus may be used to control an auto stop function by activating or deactivating the auto stop function (abstract, etc.) via a prediction from a trained machine learning model (paras. 0030-32, etc.), where the artificial intelligence is trained with training data that includes control mode labels (paras. 0010, 0057-62, 0262, etc.), where the processor (executing the artificial intelligence) transmits the (predicted) control signal to the ECU to activate or deactivate the auto stop system (paras. 0276, 0280, etc.); where the activate/deactivate control signal for the auto stop is the binary control signal activating/deactivating a control function (auto stop) of the ECU].
Green/Johansson and Chae are analogous art, as they are within the same field of endeavor, namely using trained AI/ML models to make predictions and control ECU functions.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the prediction from the AI for activating/deactivating control functions in the ECU, based upon labeled training data for training the model to do so, as taught by Chae, with the labeled training data used to train the AI to control the functions of the ECU in the system taught by Green/Johansson.
Chae provides motivation as [by using AI to control functions of the ECU, including auto stop, fuel efficiency and safety can be increased over systems that do not use the predictive modeling to control these functions (paras. 0003-4)].
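The Chae-style pattern mapped above, namely a model trained on binary activate/deactivate labels whose prediction is forwarded to the ECU as a control signal, can be sketched as follows. All names, thresholds, and the message format are hypothetical and are not taken from Chae:

```python
# Hypothetical sketch: train on binary (activate/deactivate) labels,
# then forward the predicted label to the ECU as a control signal.
def train_threshold(examples):
    """examples: (battery_level, label) pairs, label 1 = enable auto stop.
    Returns the midpoint between the two classes as a decision boundary."""
    enabled  = [x for x, y in examples if y == 1]
    disabled = [x for x, y in examples if y == 0]
    return (min(enabled) + max(disabled)) / 2

def control_signal(boundary, battery_level):
    """Binary control signal: 1 activates the function, 0 deactivates it."""
    return 1 if battery_level >= boundary else 0

def send_to_ecu(signal):
    """Stand-in for the bus write that would carry the signal to the ECU."""
    return {"function": "auto_stop", "enable": bool(signal)}

labeled = [(0.9, 1), (0.8, 1), (0.3, 0), (0.2, 0)]
boundary = train_threshold(labeled)
message = send_to_ecu(control_signal(boundary, 0.7))
```

The essential feature for the claim mapping is that both the training labels and the runtime output are the same binary activate/deactivate signal.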
As per claim 14, Green/Johansson/Chae teaches wherein the archived operating data include archived vehicle data and/or archived global positioning system (GPS) data, and the archived operating data are additionally used during the training phase [supervised training techniques can be performed to train the model to determine a yield decision based at least in part on the feature(s) associated with an object. For example, the machine-learned yield model can be trained based at least in part on log data annotated with yield labels (Green: col. 10, lines 8-54; etc.)].
As per claim 15, Green/Johansson/Chae teaches wherein a recurrent neural network is used as the artificial intelligence [For example, the machine-learned model can be or can otherwise include one or more various model(s) such as, for example, decision tree-based models (e.g., random forest models such as boosted random forest classifiers), neural networks (e.g., deep neural networks), or other multi-layer non-linear models. Neural networks can include recurrent neural networks (e.g., long short-term memory recurrent neural networks), feed-forward neural networks, convolutional neural networks, and/or other forms of neural networks (Green: col. 19, lines 1-11; etc.)].
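To illustrate what "recurrent" means in the Green passage above, the following toy single-unit Elman cell (invented weights, not from any cited reference) shows that the hidden state feeds back, so the final output depends on the order of the inputs, which a memoryless feed-forward model cannot distinguish:

```python
import math

# Toy single-unit Elman recurrent cell with fixed, made-up weights.
def rnn_step(h_prev, x, w_x=0.5, w_h=0.9, b=0.0):
    return math.tanh(w_x * x + w_h * h_prev + b)

def run_sequence(xs):
    h = 0.0                      # initial hidden state
    for x in xs:
        h = rnn_step(h, x)       # state feeds back at every step
    return h

# Same multiset of inputs, different order -> different final state.
a = run_sequence([1.0, 0.0, 0.0])
b = run_sequence([0.0, 0.0, 1.0])
```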
As per claim 16, see the rejection of claim 12, above, wherein Green/Johansson/Chae also teaches a non-transitory computer-readable storage medium having stored thereon computer-executable instructions configured to cause a processor to perform operations to: [perform the method] [The autonomy computing system 102 includes one or more processors 112 and a memory 114. The one or more processors 112 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, a FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 114 can include one or more non-transitory computer-readable storage mediums, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 114 can store data 116 and instructions 118 which are executed by the processor 112 to cause autonomy computing system 102 to perform operations (Green: col. 12, lines 47-59; fig. 1; etc.)].
As per claim 17, see the rejection of claim 12, above, wherein Green/Johansson/Chae also teaches a system for operating a motor vehicle comprising a processor and a memory, the memory storing instructions executable by the processor, the instructions including instructions to: [perform the method] [The autonomy computing system 102 includes one or more processors 112 and a memory 114. The one or more processors 112 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, a FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 114 can include one or more non-transitory computer-readable storage mediums, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 114 can store data 116 and instructions 118 which are executed by the processor 112 to cause autonomy computing system 102 to perform operations (Green: col. 12, lines 47-59; fig. 1; etc.)].
As per claim 19, see the rejection of claim 14, above.
As per claim 20, see the rejection of claim 15, above.
As per claim 21, Green/Johansson/Chae teaches the system of claim 17, further including the ECU [the autonomous vehicle includes the autonomy computing system, which includes the processor, memory, etc., as well as the vehicle controller and controls (Green: fig. 1; etc.)].
As per claim 22, Green/Johansson/Chae teaches a motor vehicle including the system of claim 21 [the autonomous vehicle includes the autonomy computing system, which includes the processor, memory, etc., as well as the vehicle controller and controls (Green: fig. 1; etc.)].
Claim(s) 13 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Green (US 10,019,011 – cited in an IDS), in view of Johansson et al. (Vehicle Applications of Controller Area Network, 2005, pgs. 1-25), further in view of Chae (US 2019/0360446), and further in view of Langner et al. (Estimating the Uniqueness of Test Scenarios derived from Recorded Real-World-Driving-Data using Autoencoders, June 2018, pgs. 1860-1866).
As per claim 13, Green/Johansson/Chae teaches the method of claim 12, as described above.
While Green/Johansson/Chae teaches performing simulation to evaluate the acquired data and produce labeled training data (see above), it has not been relied upon for teaching wherein a simulation is carried out in an X in the loop (XiL) environment during the simulation phase.
Langner teaches wherein a simulation is carried out in an X in the loop (XiL) environment during the simulation phase [simulation-based approaches are an essential component of today’s validation and verification (V&V) strategies in automotive feature development. One approach is X-in-the-loop (XiL) testing, in which the simulation environment is reused throughout all test phases (pg. 1861, fourth paragraph; etc.)].
Green/Johansson/Chae and Langner are analogous art, as they are within the same field of endeavor, namely training neural networks for autonomous automotive control systems.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to utilize an XiL environment during the simulation for validation/verification of the model data, as taught by Langner, for the simulation phase producing the labeled training data in the system taught by Green/Johansson/Chae.
Langner provides motivation as [Therefore, simulation-based approaches are an essential component of today’s V&V strategies in automotive feature development. The cost-effective and reproducible nature of the simulation complements real-world-test-drives and prototypes perfectly. However, simulations must be carefully parametrized. In addition to extensive models for the system environment, the vehicle, the driver and other components, all driving scenarios must also be provided. This results in large manual efforts to create the simulation environment. One approach is X-in-the-loop (XiL) testing, in which the simulation environment is reused throughout all test phases. This not only reduces modeling efforts, but also guarantees consistent simulation models throughout the entire development process (pg. 1861, fourth paragraph; etc.)].
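The "X-in-the-loop" idea Langner describes, reusing one simulation environment around whatever component "X" is under test, reduces to a closed loop between a simulated plant and the device or software under test. A minimal illustrative sketch (toy vehicle model and controller, all values invented, not from Langner):

```python
# Minimal closed-loop ("X-in-the-loop") sketch: a simulated plant is
# exercised against the component under test, here a software controller.
def plant_step(speed, throttle, dt=0.1, drag=0.05):
    """Toy longitudinal vehicle model (Euler step)."""
    return speed + dt * (throttle - drag * speed)

def controller(speed, target=20.0, gain=0.5):
    """The 'X' under test: a proportional speed controller."""
    return gain * (target - speed)

def run_xil(steps=500):
    """Run the closed loop; returns the settled vehicle speed."""
    speed = 0.0
    for _ in range(steps):
        speed = plant_step(speed, controller(speed))
    return speed
```

In XiL practice the controller stub would successively be replaced by model, software, processor, or hardware variants while the surrounding plant simulation stays identical, which is the reuse-across-test-phases property Langner emphasizes.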
As per claim 18, see the rejection of claim 13, above.
Response to Arguments
Applicant’s arguments, see the remarks, filed 23 February 2026, with respect to the rejection(s) of claim(s) 12-22 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Chae, which has been relied upon for teaching using labeled training data including a control signal to activate/deactivate certain functions of an ECU, to train an AI model to provide the activation/deactivation signals to the ECU.
Conclusion
The following is a summary of the treatment and status of all claims in the application as recommended by M.P.E.P. 707.07(i): claims 1-11 are cancelled; claims 12-22 are rejected.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Stefan (US 2017/0132118) – discloses a method/system for testing autonomous vehicle software, including using XiL devices as a preparatory validation environment.
Szalay et al. (Next Generation X-in-the-Loop Validation Methodology for Automated Vehicle Systems, 24 Feb 2021, pgs. 35616-35632) – discloses systems/methods for validation of automated vehicle systems using XiL simulation.
Tibba et al. (Testing Automotive Embedded Systems under X-in-the-Loop Setups, Jan 2017, pgs. 1-8) – discloses systems/methods for testing of automated vehicle systems using XiL simulation.
Geerlings (US 2015/0302734) – discloses a cloud system including a trainable transceiver that is trained to provide an activation control signal to an ECU.
Jagbrant (US 2021/0163031) – discloses a system for training a classifier (or multiple classifiers) to provide a power mode control signal to activate/deactivate a high (or low) power mode.
Hashimoto (US 2021/0312198) – discloses training data for training a ML model that includes labeled OFF states for a turn signal controller.
The examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.
When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEORGE GIROUX whose telephone number is (571)272-9769. The examiner can normally be reached M-F 10am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GEORGE GIROUX/Primary Examiner, Art Unit 2128