DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fox (US 11,650,807 B2) and further in view of Geddes et al. (US 2022/0083450 A1).
As per claim 1, Fox teaches the invention as claimed including, “A method, comprising:
detecting a defect in first code associated with one or more computing devices of a vehicle;”
Fox teaches, receiving at a server remote from the at least one vehicle, Electronic Control Unit (ECU) activity data from the at least one vehicle. Determining, based on the ECU activity data, a software vulnerability affecting the at least one vehicle, the software vulnerability being determined based on a deviation between the received ECU activity data and expected ECU activity data (Column 3, lines 25-39). Also see column 13, line 56 – column 14, lines 1-2. ECUs throughout a vehicle may be configured to report data regarding their operations and functionality to an orchestrator for machine learning and artificial intelligence. The orchestrator may perform algorithms to detect software anomalies, errors, and faults (column 15, line 58 – column 16, lines 1-2).
“generating a correction to the defect using a first artificial intelligence (AI) model;
obtaining updated code of the first code based at least in part on the correction;
determining that the updated code satisfies one or more criteria; and
transferring the updated code to the vehicle in response to the updated code satisfying the one or more criteria.”
Fox teaches, identifying, at the server, an ECU software update (correction) based on the determined software vulnerability; and sending, from the server, a delta file (updated code) configured to update software on the ECU (Column 3, lines 25-39). Based on the machine learning or artificial intelligence functions of the orchestrator, recommended changes may be suggested or automatically implemented to maintain the health of the vehicle's ECUs (software update) (column 16, lines 5-14). In some embodiments, the machine learning or artificial intelligence functions are performed at a server and may provide recommended changes for entire fleets of vehicles (column 16, lines 14-18). The server is configured to identify an ECU software update if it is determined that there are software vulnerabilities affecting vehicles. The server may also generate and send a delta file configured to update software on the ECUs of the affected vehicles (Column 18, lines 9-19).
However, Fox does not explicitly appear to teach, “determining that the updated code satisfies one or more criteria;” and “transferring the updated code to the vehicle in response to the updated code satisfying the one or more criteria.”
Geddes et al. teaches, patch generation may include calculation of one or more confidence factors or metrics for each patch. A patch generator may automatically implement some or all patches. All patches with a confidence score greater than a specified threshold (verified) may be automatically applied or a patch with the highest confidence score may be automatically selected (verified) by a patch generator (0040-0042). Patch generator may verify each patch to determine whether the generated patch fixes the buggy code section (0043). Also see figure 7.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fox with Geddes et al. because both teach the generation and deployment of a patch to fix a determined software issue. Geddes et al. teaches verifying the patch prior to deployment. This helps lower the chance of introducing more issues to the system and also ensures that the patch fixes the issue it was intended to fix. Using this known technique of verifying the patch, as taught in Geddes et al., in the system of Fox would improve the similar device of Fox, yielding the predictable result of ensuring that the selected/generated patch is of high confidence and corrects the issue it was created for.
As per claim 2, Fox further teaches, “The method of claim 1, wherein:
the one or more computing devices comprises one or more processors;
the first code comprises one or more instructions that, when executed by the one or more processors, cause the vehicle to perform one or more actions; and
the defect comprises an error in the first code that, when executed by the one or more processors, causes the one or more processors to produce an incorrect or unexpected result.”
Fox teaches, modern vehicles utilize many electronic control units (ECUs) to control operations of components such as engines, powertrains, transmissions, brakes, suspensions, onboard entertainment systems, communications systems, and the like (column 1, lines 35-41). Determining, based on the ECU activity data, a software vulnerability affecting the at least one vehicle, the software vulnerability being determined based on a deviation between the received ECU activity data and expected ECU activity data (Column 3, lines 25-39).
As per claim 3, Fox further teaches, “The method of claim 1, wherein detecting the defect comprises obtaining a notification of the defect from one or more sources including an issue tracking system, a log of the vehicle, a report of the vehicle, or a combination thereof.”
Fox teaches, receiving at a server remote from the at least one vehicle, Electronic Control Unit (ECU) activity data (log) from the at least one vehicle. Determining, based on the ECU activity data (log), a software vulnerability (notification) affecting the at least one vehicle, the software vulnerability being determined based on a deviation between the received ECU activity data and expected ECU activity data (Column 3, lines 25-39). Also see column 13, line 56 – column 14, lines 1-2. ECUs throughout a vehicle may be configured to report data regarding their operations and functionality to an orchestrator for machine learning and artificial intelligence. The orchestrator may perform algorithms to detect software anomalies, errors, and faults (notification of defect) (column 15, line 58 – column 16, lines 1-2).
As per claim 4, Fox further teaches, “The method of claim 1, wherein detecting the defect comprises:
providing, to a second artificial intelligence (AI) model, input data comprising the first code; and
obtaining, from the second AI model, output data comprising an indication of the defect.”
Fox teaches, the orchestrator may be configured to access historical data relating to processing activity of the ECU. The historical data may represent expected processing activity. The orchestrator may compare the real-time processing activity data with the historical data to identify one or more anomalies in the real-time processing activity. The orchestrator may implement various types of data processing techniques, including machine learning techniques, to identify the anomalies (column 13, line 55 – column 14, lines 1-2).
As per claim 5, Fox further teaches, identifying, at the server, an ECU software update based on the determined software vulnerability; and sending, from the server, a delta file configured to update software on the ECU (Column 3, lines 25-39). Based on the machine learning or artificial intelligence functions of the orchestrator, recommended changes may be suggested or automatically implemented to maintain the health of the vehicle's ECUs (software update) (column 16, lines 5-14). In some embodiments, the machine learning or artificial intelligence functions are performed at a server and may provide recommended changes for entire fleets of vehicles (column 16, lines 14-18). The server is configured to identify an ECU software update if it is determined that there are software vulnerabilities affecting vehicles. The server may also generate and send a delta file configured to update software on the ECUs of the affected vehicles (Column 18, lines 9-19).
However, Fox does not explicitly appear to teach, “The method of claim 1, further comprises training the first AI model to correct defective code of the vehicle using training code as expected output, wherein the training code is configured to perform one or more vehicle functions.”
Geddes et al. teaches, the patch model generator may be an end-to-end solution which may train, generate, and/or otherwise update patch models based on data received from one or more code repositories (0054). Models may be updated and/or generated at periodic intervals, as new data (training data) is received at the code repository (0055). Also see 0060.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fox with Geddes et al. because both teach the use of machine learning and artificial intelligence to determine defects in code/software. Fox teaches the use of machine learning, and the examiner states that it would have been inherent to one of ordinary skill in the art that machine learning must be trained in some way to perform its job, such as the detection of software anomalies, errors, and faults. Geddes et al. teaches one known method of this training, yielding the predictable result of a trained patch model.
As per claim 7, Fox and Geddes et al. further teach, “The method of claim 1, wherein generating the correction comprises:
providing, to the first AI model, input data comprising an indication of the defect in the first code; and
obtaining, from the first AI model, output data comprising an indication of the correction.”
Fox teaches, identifying, at the server, an ECU software update based on the determined software vulnerability; and sending, from the server, a delta file configured to update software on the ECU (Column 3, lines 25-39). Based on the machine learning or artificial intelligence functions of the orchestrator, recommended changes may be suggested or automatically implemented to maintain the health of the vehicle's ECUs (software update) (column 16, lines 5-14). In some embodiments, the machine learning or artificial intelligence functions are performed at a server and may provide recommended changes for entire fleets of vehicles (column 16, lines 14-18). The server is configured to identify an ECU software update if it is determined that there are software vulnerabilities affecting vehicles. The server may also generate and send a delta file configured to update software on the ECUs of the affected vehicles (Column 18, lines 9-19).
Geddes et al. teaches, if a bug is identified in the code, the code may be provided to a patch generator to automatically generate a patch (0029). Also see 0035-0036. Each bug is matched to a patch model based on analysis. The patch generator generates a patch using the identified model. A model may include a code snippet that may replace a code snippet associated with the buggy code (0038-0039). Also see 0054, 0060, and 0077-0081. Also see figure 5.
As per claim 8, Geddes et al. further teaches, “The method of claim 1, wherein the one or more criteria comprises:
a functional safety standard including one or more safety integrity levels;
one or more performance indicators; or
a combination thereof.”
Geddes et al. teaches the calculation of one or more confidence factors or metrics for each patch (a calculated probability of success). A patch with a score greater than a specified threshold, or the patch with the greatest confidence score, is selected (0040-0042).
As per claim 9, Fox and Geddes et al. further teach, “The method of claim 1, further comprising:
detecting at least one defect in second code associated with the one or more computing devices of the vehicle; and
sending a notification of the at least one defect in response to identifying that the second code is prohibited from defect correction using the first AI model.”
Fox teaches, receiving at a server remote from the at least one vehicle, Electronic Control Unit (ECU) activity data from the at least one vehicle. Determining, based on the ECU activity data, a software vulnerability (notification) affecting the at least one vehicle, the software vulnerability being determined based on a deviation between the received ECU activity data and expected ECU activity data (Column 3, lines 25-39). Also see column 13, line 56 – column 14, lines 1-2.
Geddes et al. teaches patch generator may identify buggy code in source code and/or generate a bug report (0031). Geddes et al. further teaches that each bug may be matched to a patch model based on analysis such as a matching metric between code snippets associated with the bug and/or available patch models. Identified or matching models may be retrieved from a resource such as a model database associated with patch generator (0038). Patch generator may generate each patch using one or more of the identified models (0039). Model identification and/or patch generation includes calculation of one or more confidence factors or metrics for each patch or patch model (e.g., a calculated probability of success). Each patch, and/or associated information such as confidence score, code snippet, etc. may be presented for evaluation by an end user such as a software developer. A user may accept and/or reject one or more patches for application based on the confidence score (0040-0041).
As per claim 10, Fox further teaches, “The method of claim 1, wherein the one or more computational devices comprises:
one or more microcontroller units (MCUs);
one or more electronic control units (ECUs);
one or more sensors;
one or more advanced driver-assistance systems (ADAS);
a data communications module;
or any combination thereof.”
Fox teaches, receiving at a server remote from the at least one vehicle, Electronic Control Unit (ECU) activity data from the at least one vehicle. Determining, based on the ECU activity data, a software vulnerability (notification) affecting the at least one vehicle, the software vulnerability being determined based on a deviation between the received ECU activity data and expected ECU activity data (Column 3, lines 25-39). Also see column 13, line 56 – column 14, lines 1-2. ECUs throughout a vehicle may be configured to report data regarding their operations and functionality to an orchestrator for machine learning and artificial intelligence. The orchestrator may perform algorithms to detect software anomalies, errors, and faults (column 15, line 58 – column 16, lines 1-2).
As per claims 11-15 and 17-20, claims 11-15 and 17-20 contain similar limitations to claims 1-5 and 7-10 and are therefore rejected for the same reasons.
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Fox (US 11,650,807 B2) and Geddes et al. (US 2022/0083450 A1) as applied to claims 5 and 15 above, and further in view of Strenski et al. (US 2025/0103872 A1).
As per claim 6, Geddes et al. further teaches, the patch model generator may train patch models using deep learning by standardizing code, extracting bugs, and evaluating patches (0060). Also see 0077-0081. Also see figure 3, steps 300, 360, 370 and 380. Also see 0054-0055.
However Geddes et al. does not explicitly appear to teach, “The method of claim 5, wherein training the first AI model comprises:
evaluating output data of the first AI model based at least in part on the training code; and
adjusting the first AI model based at least in part on the evaluated output data, wherein the one or more vehicle functions satisfy a functional safety standard.”
Strenski et al. teaches, a trained DNN model can be validated by applying test input data to the trained DNN model and evaluating the output for accuracy. Based on the evaluation, the weights can be adjusted so as to retrain and update the DNN model to improve performance (e.g., accuracy in classification) (paragraph 0013).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fox and Geddes et al. with Strenski et al. Both Fox and Geddes et al. teach the use of an AI/machine learning model to generate a patch due to a software issue. Geddes et al. teaches steps for training/updating a model. Strenski et al. teaches a method to evaluate a model for accuracy and to modify the model to make it more accurate. It would have been obvious to apply the known method of retraining a model in Strenski et al. to the trained model of Fox and Geddes et al. to improve the results of the model.
As per claim 16, claim 16 contains similar limitations to claim 6 and is rejected for similar reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lin et al. (US 2009/0328002 A1) teaches techniques to analyze and detect software and program errors and to give suggestions for cures (Abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK A GOORAY whose telephone number is (571)270-7805. The examiner can normally be reached Monday - Friday 10:00am - 6:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock can be reached at 571-272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARK A GOORAY/Examiner, Art Unit 2199
/LEWIS A BULLOCK JR/Supervisory Patent Examiner, Art Unit 2199