DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Continued Examination
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11 December 2025 [hereinafter Response] has been entered, where:
Claims 1, 11, and 16 have been amended.
Claims 2-7, 10, 12, 13, 15, 17, 18, 20, and 21 have been cancelled.
Claims 1, 8, 9, 11, 14, 16, 19, 22, and 23 are pending.
Claims 1, 8, 9, 11, 14, 16, 19, 22, and 23 are rejected.
Claim Rejections – 35 U.S.C. § 112
3. Claims 1, 8, 9, 11, 14, 16, 19, 22, and 23 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claim 1, line 11, recites “the automative task.” There is insufficient antecedent basis for this limitation in the claim.
Claim 1, line 12, recites “the automotive entry.” There is insufficient antecedent basis for this limitation in the claim.
Claim 11, line 9, recites “the automative task.” There is insufficient antecedent basis for this limitation in the claim.
Claim 11, line 10, recites “the automotive entry.” There is insufficient antecedent basis for this limitation in the claim.
Claim 16, line 13, recites “the automative task,” but it appears this should instead read --the automotive task--.
Claim 16, line 14, recites “the automotive entry.” There is insufficient antecedent basis for this limitation in the claim.
Claims 8 and 9 depend directly or indirectly from claim 1. Claim 14 depends from claim 11. Claims 19, 22, and 23 depend directly or indirectly from claim 16. Claims 8, 9, 14, 19, 22, and 23 are rejected as depending from a rejected claim; further, the claims fail to cure the deficiencies of claims 1, 11, and 16, respectively.
Claim Rejections – 35 U.S.C. § 103
4. The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
6. This application currently names joint inventors. In considering patentability of the claims the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the Examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.
7. Claims 1, 8, 11, 14, 16, 19, 22, and 23 are rejected under 35 U.S.C. § 103 as being unpatentable over US Published Application 20200351283 to Salunke et al. [hereinafter Salunke] in view of Tepić et al., “AutoSec: Multidimensional Timing-Based Anomaly Detection for Automotive Cybersecurity,” IEEE (2020) [hereinafter Tepić].
Regarding claims 1, 11, and 16, Salunke teaches [a] non-transitory computer-readable medium comprising memory with instructions encoded thereon, the instructions, when executed, causing one or more processors to perform operations (Salunke ¶ 0200 teaches “a non-transitory computer readable storage medium comprises instructions which, when executed by one or more hardware processors, causes performance of any of the operations described herein and/or recited in any of the claims. [(that is, non-transitory computer-readable medium comprising memory)]”) of claim 1, [a] method (Salunke ¶ 0002 teaches “anomaly detection systems and methods”) of claim 11, and a system comprising memory with instructions encoded thereon; and one or more processors, that when executing the instructions, are caused to perform (see above Salunke ¶¶ 0002, 0200) of claim 16, the instructions comprising instructions to:
receive entity function data (Salunke, Fig. 1, teaches “an example anomaly detection system [Examiner annotations in dashed-line text boxes]:”
[media_image1.png (603 × 616, greyscale): Salunke Fig. 1, reproduced with Examiner annotations]
Salunke ¶ 0047 teaches “systems described herein capture time series signals from multiple entities of an application”), the entity function data indicating a time-based metric (Salunke, Abstract, teaches “detects a first set of anomalies in a first time series that tracks a first metric and assigns a different respective range of time to each anomaly [(that is, “indicating a time-based metric . . . of a function performed by an entity)]”) . . . and a type of a function performed by an . . . entity (Salunke ¶ 0055 teaches “agents 114 a-j may be configured to capture topology metadata that identifies relationships between different targets. For instance, the topology metadata may identify functional dependencies between different targets. . . . Topology metadata may capture such information, including metadata that identifies each individual resource that is deployed, the respective type of resource, and the respective functional dependencies of the resource [(that is, each “time series” is a type of function performed by an . . . entity)]”),
the time-based metric measuring the performance of the function by the entity with respect to a time measurement (Salunke ¶ 0044 teaches “an anomaly region may correspond to different metrics exhibiting different combinations of anomalous high values, anomalous low values, and/or values within an expected range [(that is, time-based metrics measuring the performance of the function)]”; with regard to time measurement, Salunke ¶ 0057 teaches “[d]ata collector 120 may collect or generate timestamps for sample values in a time series. A timestamp for a sample value indicates the date and time at which the sample value was measured or otherwise observed. For example, CPU performance on a target host that is sampled every five minutes may have a sequence of timestamps as follows for the collected samples: August 16, 11:50 p.m., August 16, 11:55 p.m., August 17, 12:00 a.m., and August 17, 12:05 a.m. [(that is, “metrics exhibiting . . . values” is the time-based metric measuring the performance of the function by the entity with respect to a time measurement)]”),
the type categorizing the function among a plurality of . . . functions performed by the . . . entity (Salunke ¶ 0104 teaches “[t]he training process may form the feature vectors by fetching and extracting values for features used to determine similarity [(that is, “similarity” is categorizing the function among a plurality of . . . functions)]. Example features may include, but are not limited to: entity type indicating the type of software or hardware resource on which the anomalous behavior occurred (e.g., database host, middleware, load balancer, web server, etc.); metric identifier indicating what metric was exhibiting the anomalous behavior (e.g., active sessions, memory performance, CPU utilization, page hits per minute, I/O throughput, etc.); lifecycle type (e.g., archival, test, production, etc.); and/or time (e.g., hour of day, day of the week), when the anomaly occurred [(that is, “entity type,” “metric identifier,” and “lifecycle type” are categorizing the function among a plurality of functions performed by the . . . entity)]”);
provide the entity function data including both the time-based metric . . . and the type of the function performed by the . . . entry into a supervised machine learning model (Salunke ¶ 0061 teaches “[e]valuation analytic 132 evaluates incoming data [(that is, provide the entity function data)] provided by data collector 120 against models trained by training analytic 131 to monitor targets 112 a-j for anomalous behavior”; Salunke ¶ 0035 teaches “[a] supervised machine learning process may then train the model by inferring a function from the labeled training data. The trained model may be used to classify new examples as anomalous or unanomalous”; further, Salunke ¶ 0104 teaches that in “the machine-assisted supervised mode, the training process generates a set of feature vectors for data points labeled as anomalous (operation 502). . . . Example features may include, but are not limited to: entity type indicating the type of software or hardware resource [(that is, “entity type” is the type of the function)] on which the anomalous behavior occurred (e.g., database host, middleware, load balancer, web server, etc.); metric identifier [(that is, the time-based metric)] indicating what metric was exhibiting the anomalous behavior (e.g., active sessions, memory performance, CPU utilization, page hits per minute, I/O throughput, etc.)” [(that is, the “incoming data” and “trained model” are to provide the entity function data including both the time-based metric . . . and the type of the function performed by the . . . entry into a supervised machine learning model)]),
the supervised machine learning model trained using two or more labeled clusters (Salunke ¶ 0104 teaches “In the machine-assisted supervised mode, the training process generates a set of feature vectors for data points labeled as anomalous”; Salunke ¶ 0110 teaches “[o]nce the clusters have been formed, the training process identifies a cluster having at least one data point with a user-set label (operation 506). For example, the training process may identify a cluster having a data point explicitly labeled as anomalous or unanomalous by the user [(that is, “labeled anomalous or unanomalous” is the supervised machine learning model trained using two or more labeled clusters)]”) to apply a label to the entity function data (Salunke ¶ 0130 teaches that “[o]nce trained, an anomaly detection model may be used to make predictions against new data to monitor for anomalies [(that is, to apply a label to the entity function data)]”),
wherein a labeled cluster of the two or more labeled clusters is identified as an outlier cluster (Salunke ¶ 0110 teaches “[o]nce the clusters have been formed, the training process identifies a cluster having at least one data point with a user-set label (operation 506). For example, the training process may identify a cluster having a data point explicitly labeled as anomalous [(that is, “identifies a cluster” as an outlier cluster)] or unanomalous by the user [(that is, wherein a labeled cluster of the two or more labeled clusters is identified as an outlier cluster)]”) . . . , and
wherein the label indicates a classification of the entity function data into one of two or more clusters including the outlier cluster (Salunke ¶ 0061 teaches “evaluation analytic 132 may output a set of data that indicates which sample data points within a given time series are anomalous and/or which sample data points are un-anomalous”; Salunke ¶ 0133 teaches “the evaluation process may use the trained classifiers to map incoming data points to a positive class or a negative class [(that is, “negative class” is the outlier cluster)]. Further, the evaluation process may determine the anomaly boundary region(s) where the incoming data points fall [(that is, wherein the label indicates a classification of the entity function data into one of two or more clusters including the outlier cluster)]”); and
transmit a notification of the classification to a client device (Salunke ¶ 0065 teaches “[p]resentation engine 135 may render user interface elements and receive input via user interface elements. Examples of interfaces include a GUI, a command line interface (CLI), a haptic interface, a voice command interface, and an API [(that is, transmit a notification of the classification to the client device)]”).
Though Salunke teaches that a system identifies a plurality of time series that track different metrics over time for a set of one or more computing resources, Salunke does not explicitly teach that the entity is an “automotive entity,” or that the plurality of functions are a “plurality of automotive functions.” Also, Salunke does not explicitly teach –
* * *
[receive entity function data . . . indicating a time-based metric] in connection with performance of an automotive task and a type of function performed by an automotive entity, . . . [the type categorizing the function] among a plurality of automotive functions performed by the automotive entity;
[provide the entity function data into a supervised machine learning model . . . , wherein a labeled cluster of the two or more labeled clusters is identified as an outlier cluster] reflecting outlier performance of the automotive task, . . . ; and
* * *
But Tepić teaches that the entity is an “automotive entity” (Tepić, left column of p. 1, “I. Introduction,” first paragraph, teaches implementation by the automotive industry, which is an automotive entity) and that the plurality of functions are a “plurality of automotive functions” (Tepić, left column of p. 3, “A. System Architecture,” second paragraph, teaches to “collect the various timing parameters of the legitimate [automotive] code. The collected traces are then used as an input to a clustering algorithm which generates a number of clusters, thus forming the timing model” [(that is, “collect the various timing parameters” and “forming the timing model” are a plurality of automotive functions)]). Also, Tepić teaches –
* * *
[receive entity function data . . . indicating a time-based metric] in connection with performance of an automotive task (Tepić, Fig. 2, teaches a tracing algorithm of legitimate automotive code [Examiner annotations in dashed-line text boxes]:
[media_image2.png (440 × 520, greyscale): Tepić Fig. 2, reproduced with Examiner annotations]
Tepić, left column of p. 2, “I. Introduction,” first paragraph, teaches “we introduce AutoSec, a multidimensional anomaly detection algorithm which relies on observing the timing parameters of the real-time software components running on various electronic control units (ECUs) [(that is, “observing the timing parameters on ECUs” is [receiving entity function data . . . indicating a time-based metric] in connection with performance of an automotive task)]”) . . . ;
[provide the entity function data into a supervised machine learning model . . . , wherein a labeled cluster of the two or more labeled clusters is identified as an outlier cluster] reflecting outlier performance of the automotive task (Tepić, Fig. 3, teaches representing abnormal execution timings [Examiner annotations in dashed-line text boxes]:
[media_image3.png (326 × 480, greyscale): Tepić Fig. 3, reproduced with Examiner annotations]
Tepić, right column of p. 3, “A. System Architecture,” first paragraph, teaches “Figure 3 illustrates an example of such a timing model where it visualizes the various clusters. In this example, the executions – marked in black dots-located inside the boundaries of a certain cluster represents a class of legitimate code with normal behavior. Whereas, the black dots - surrounded by red circles in Figure 3 – represents the abnormal executions which have been annotated as anomalies” [(that is, reflecting outlier performance of the automotive task)]), . . . ; and
* * *
Salunke and Tepić are from the same or similar field of endeavor. Salunke teaches anomaly analysis of time-series metrics. Tepić teaches a host-based anomaly detection algorithm which relies on observing four timing parameters of the executed software components to accurately detect malicious behavior on the operating system level. Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify Salunke, which pertains to anomaly analysis of time-series metrics, with Tepić’s treatment of detecting anomalous executions, via timing traces collected over an automotive CAN bus, as a clustering problem.
The motivation to do so is that, in autonomous driving and driver assistance applications, it is “highly crucial to identify any misbehavior of the software components which might occur owing to either [remotely-launched attack] threats or even software/hardware malfunctioning.” (Tepić, Abstract).
Regarding claims 8 and 19, the combination of Salunke and Tepić teaches all of the limitations of claims 1 and 16, respectively, as described above in detail.
Salunke teaches -
further comprising:
in response to receiving user input rejecting the classification, re-training the supervised machine learning model (Salunke ¶ 0120 teaches “[a]s the user is reviewing other data points assigned to the new cluster, the training process may recommend the same user-set label. If the user again specifies a different label then what is recommend [(that is, “specifies a different label than what is recommended” is in response to receiving user input rejecting the classification)], the process may again re-cluster to account for the user feedback in an attempt to incorporate the newly learned patterns and facilitate the labeling process [(that is, “re-cluster” is re-training the supervised machine learning model)]”) to reduce a corresponding likelihood that a function monitor field is generated on a function monitor (Salunke ¶ 0065 teaches “[p]resentation engine 135 may render user interface elements and receive input via user interface elements. Examples of interfaces include a GUI, a command line interface (CLI), a haptic interface, a voice command interface, and an API”; Salunke ¶ 0067 & Fig. 1 teaches “[c]lients 150 a-k represent one or more clients that may access anomaly management services 130 to generate, view, and navigate summaries. Additionally or alternatively, clients 150 a-k may invoke responsive actions and/or configure automated triggers via the interfaces described herein [(that is, a function monitor field)]”; Salunke ¶¶ 0176-77 teaches “[m]icroservices may be managed and updated separately, written in different languages, and be executed independently from other microservices. Microservices provide flexibility in managing and building applications. . . . 
Microservices may provide monitoring services that notify a microservices manager (such as If-This-Then-That (IFTTT), Zapier, or Oracle Self-Service Automation (OSSA)) when trigger events from a set of trigger events exposed to the microservices manager occur [(that is, reduce a corresponding likelihood that a function monitor field is generated on a function monitor)]”).
Regarding claim 14, the combination of Salunke and Tepić teaches all of the limitations of claim 11, as described above in detail.
Salunke teaches -
further comprising, in response to receiving a user input comprising a rejection of the classification (Salunke ¶ 0120 teaches “[a]s the user is reviewing other data points assigned to the new cluster, the training process may recommend the same user-set label. If the user again specifies a different label then what is recommend [(that is, “specifies a different label than what is recommended” is in response to receiving user input rejecting the classification)], the process may again re-cluster to account for the user feedback in an attempt to incorporate the newly learned patterns and facilitate the labeling process [(that is, “re-cluster” is re-training the supervised machine learning model)]”), weakening an association between the label and the entity function data by:
reducing a likelihood that the entity function data will be classified into the outlier cluster (Salunke ¶ 0120 teaches “[a]s the user is reviewing other data points assigned to the new cluster, the training process may recommend the same user-set label. If the user again specifies a different label then what is recommend, the process may again re-cluster to account for the user feedback in an attempt to incorporate the newly learned patterns and facilitate the labeling process [(that is, “re-cluster” is reducing a likelihood that the entity function data will be classified into the outlier cluster)]”); and
reducing a corresponding likelihood that a function monitor field will be generated on a function monitor (Salunke ¶ 0065 teaches “[p]resentation engine 135 may render user interface elements and receive input via user interface elements. Examples of interfaces include a GUI, a command line interface (CLI), a haptic interface, a voice command interface, and an API”; Salunke ¶ 0067 & Fig. 1 teaches “[c]lients 150 a-k represent one or more clients that may access anomaly management services 130 to generate, view, and navigate summaries. Additionally or alternatively, clients 150 a-k may invoke responsive actions and/or configure automated triggers via the interfaces described herein [(that is, “invoke responsive actions” is a function monitor field)]”; Salunke ¶¶ 0176-77 teaches “[m]icroservices may be managed and updated separately, written in different languages, and be executed independently from other microservices. Microservices provide flexibility in managing and building applications. . . . Microservices may provide monitoring services that notify a microservices manager (such as If-This-Then-That (IFTTT), Zapier, or Oracle Self-Service Automation (OSSA)) when trigger events from a set of trigger events exposed to the microservices manager occur [(that is, reducing a corresponding likelihood that a function monitor field will be generated on a function monitor)]”).
Regarding claim 22, the combination of Salunke and Tepić teaches all of the limitations of claim 1, as is described above in detail.
Salunke teaches -
wherein the instructions further comprise instructions to
train an unsupervised machine learning model to identify outliers in training data including metric and type information (Salunke ¶ 0036 teaches “[w]hen operating in an unsupervised manner, the anomaly detection system may train a model with little to no input from a user. In particular, the anomaly detection system may train the anomaly detection model without receiving any user-set labels for the training data. The unsupervised stage may instead train an anomaly detection model as a function of labels that have been automatically assigned. Operating in an unsupervised manner allows the system to provide nearly instantaneous anomaly detection which is generally not achievable with a purely supervised approach [(that is, train an unsupervised machine learning model to identify outliers in training data including metric and type information)]”).
Regarding claim 23, the combination of Salunke and Tepić teaches all of the limitations of claim 1, as described above in detail.
Salunke teaches -
wherein the instructions further comprise instructions to
retrain the supervised machine learning model based on a user input comprising one of an affirmation or a rejection of the classification (Salunke ¶ 0120 teaches “[a]s the user is reviewing other data points assigned to the new cluster, the training process may recommend the same user-set label. If the user again specifies a different label then what is recommend [(that is, “specifies a different label than what is recommended” is based on a user input comprising one of an affirmation or a rejection of the classification)], the process may again re-cluster to account for the user feedback in an attempt to incorporate the newly learned patterns and facilitate the labeling process [(that is, “re-cluster” is retrain the supervised machine learning model)]”).
8. Claim 9 is rejected under 35 U.S.C. § 103 as being unpatentable over US Published Application 20200351283 to Salunke et al. [hereinafter Salunke] in view of Tepić et al., “AutoSec: Multidimensional Timing-Based Anomaly Detection for Automotive Cybersecurity,” IEEE (2020) [hereinafter Tepić], and further in view of US Patent 6597777 to Ho [hereinafter Ho].
Regarding claim 9, the combination of Salunke and Tepić teaches all of the limitations of claim 1, as described above in detail.
Though Salunke and Tepić teach a system that identifies a plurality of time series that track different metrics over time for a set of one or more computing resources in an automotive environment, the combination of Salunke and Tepić does not explicitly teach –
wherein the time-based metric comprises one or more of a difference between current entity function data and historical entity function data of the type or forecasted value of the entity function data
But Ho teaches -
wherein the time-based metric comprises one or more of a difference between current entity function data and historical entity function data of the type or forecasted value of the entity function data (Ho 4:24-32 teaches the “invention can be applied to detect network/service anomalies on any type of network from which an objective function can be generated from current observable real-time network performance data [(that is, current entity function data)], and which objective function can then be compared with that objective function as generated from historical network performance data [(that is, historical entity function data)], to detect an anomaly in the objective function generated from current performance data [(that is, one or more of a difference between current entity function data and historical entity function data of the type or forecasted value of the entity function data)]”).
Salunke, Tepić, and Ho are from the same or similar field of endeavor. Salunke teaches anomaly analysis of time-series metrics. Tepić teaches a host-based anomaly detection algorithm which relies on observing four timing parameters of the executed software components to accurately detect malicious behavior on the operating system level. Ho teaches measuring and analyzing network performance in real time, from which an anomaly can be detected before an actual failure occurs so that corrective actions can be executed in time to avert failures. Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify Salunke, pertaining to anomaly analysis of time-series metrics applied to detecting anomalous executions via timing traces collected over an automotive CAN bus, with Ho’s comparison of real-time network performance data against historical thresholds.
The motivation to do so is that “proactive and automatic detection of network failures and performance degradations is achieved by first converting real-time network performance data into a performance-based objective function.” (Ho 2:62-66).
Response to Arguments
9. Examiner has fully considered Applicant’s arguments and responds below accordingly.
Claim Rejections -35 U.S.C. § 103
10. Applicant submits that “Independent claims 1, 11, and 16 are amended to clarify,
* * *
[(a) receive entity function data, the entity function data indicating a time-based metric in connection with performance of an automotive task and a type of a function performed by an automotive entity,
[(a.1)] the time-based metric measuring the performance of the function by the entity with respect to a time measurement,
[(a.2)] the type categorizing the function among a plurality of automotive functions performed by the automotive entity;]
[(b)] providing the entity function data including both the time-based metric in connection with performance of the automative task and the type of the function performed by the automotive entry into a supervised machine learning model trained using two or more labeled clusters to apply a label to the entity function data,
[(b.1)] the supervised machine learning model trained using two or more labeled clusters to apply a label to the entity function data,
[(b.2)] wherein a labeled cluster of the two or more labeled clusters is identified as an outlier cluster reflecting outlier performance of the automotive task, and
[(b.3)] wherein the label indicates a classification of the entity function data into one of two or more clusters including the outlier cluster; and
* * *
(claim 1). The Office action concedes that Salunke is silent on a time-based metric in connection with performance of an automotive task, and is also silent on determining a type of a function performed by an automotive entity. This in turn means that Salunke could not provide such data as input into a supervised machine learning model.
Tepic is relied upon to cure these deficiencies; however, Tepic is silent on usage of a supervised machine learning model. Rather, Tepic uses various unsupervised machine learning models to cluster data for outlier determination. Tepic does discuss inputting timing parameters, as well as clusters, into an “AutoSec” algorithm; however, there is no input of a “type of a function performed by the automotive entity” as claimed into the AutoSec algorithm, and the AutoSec algorithm is not a supervised machine learning model. See e.g., Tepic, section V.
Accordingly, independent claims 1, 11, and 16 are patentable, as are the remaining claims at least by virtue of their dependencies. Reconsideration and withdrawal of the rejection is therefore respectfully requested.” (Response at pp. 7-8 (emphasis added by Examiner)).
Examiner’s Response:
Examiner respectfully disagrees because Salunke generally provides that “[t]echniques are disclosed for summarizing, diagnosing, and correcting the cause of anomalous behavior in computing systems. In some embodiments, a system identifies a plurality of time series that track different metrics over time for a set of one or more computing resources.” (Salunke, Abstract).
Applicant’s claims recite, inter alia, having a “supervised machine learning model trained using two or more labeled clusters to apply a label to the entity function data, wherein a labeled cluster . . . is identified as an outlier cluster reflecting outlier performance of the automotive task . . . .” (see claim 1, lines 12-18). The plain meaning of an “outlier” is “an empirical value within data collected from an enterprise or a predictive value derived from the collected data that deviates from an expected value or range of values.” (Specification ¶ 0025). The plain meaning of “entity functions” is “operations performed by entities.” (Specification ¶ 0025). Accordingly, the broadest reasonable interpretations of the claim terms “outlier” and “entity functions” cover the teachings of Salunke, which are not inconsistent with the Applicant’s disclosure. (MPEP § 2111).
As noted above, Salunke does not provide a context relating to an “automotive entity” or “automotive task.”
In this regard, the teachings of Tepić are relied upon as teaching the “automotive entity” or “automotive task” context as set out above in detail.
The plain meaning of “automotive task” may include vehicle and vehicle component identification as well as aspects of an automotive supply or repair chain (Specification ¶ 0042 (“automotive supplier whose entity functions include a number of requests”); see also Specification ¶ 0044). The broadest reasonable interpretation of the terms “automotive entity” and “automotive task” covers the teachings of Tepić relating to “legitimate software components—defined by the car manufactures during the design time—may exhibit abnormal behavior due to software/hardware malfunctioning” including vehicle ECU and CAN bus architectures by way of example, and is not inconsistent with the Applicant’s disclosure. (MPEP § 2111).
Applicant further argues that “Tepić is silent on usage of a supervised machine learning model. Rather, Tepić uses various unsupervised machine learning models to cluster data for outlier determination,” and that “there is no input of a ‘type of a function performed by the automotive entity’ as claimed into the AutoSec algorithm.” (Response at pp. 7-8).
But Tepić does teach that “[o]ur system consists of a number of ECUs [(that is, automotive entity)] which are computational units in charge of specific control functions, such as engine control, suspension control, and driver assistance [(that is, these are each a function performed by the automotive entity)]. Such ECUs are connected with each other through CAN buses.” (Tepić, left column at p. 3, “A. System Architecture,” first paragraph).
Also, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. Where a rejection of a claim is based on two or more references, a reply that is limited to what a subset of the applied references teaches or fails to teach, or that fails to address the combined teaching of the applied references, may be considered to be an argument that attacks the reference(s) individually, as is the case here with the cited prior art of Tepić. (MPEP § 2145.IV).
The rejections hereinabove clearly set forth which claim limitations are taught by each of the prior art references, and the reasons why it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant's invention to combine their teachings. Applicant has not explained why the cited prior art references cannot be combined in the manner set forth in the rejection.
Conclusion
11. The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure:
US Published Patent Application 2019/0176639 to Kumar et al. teaches that a degradation state of a battery is predicted based on a rate of convergence of a metric, derived from a sensed vehicle operating parameter, toward a defined threshold determined based on the past history of the metric. The predicted state of degradation is then converted into an estimate of the time or distance remaining before the component needs to be serviced, which is displayed to the vehicle operator. Vehicle control and communication strategies may be defined with respect to the predicted state of degradation.
Serban Vadineanu et al., "Robust and Accurate Period Inference using Regression-Based Techniques," IEEE (2020), teaches a period inference framework that uses regression-based machine-learning (RBML) methods trained on automotive task sets and thoroughly investigates the accuracy and robustness of different families of RBML methods in the presence of uncertainties in the system parameters, showing, on both synthetically generated traces and traces from actual systems, that the proposed solutions can reduce the error of period estimation by two to three orders of magnitude with respect to the state of the art.
12. Any inquiry concerning this communication or earlier communications from the Examiner should be directed to KEVIN L. SMITH whose telephone number is (571) 272-5964. Normally, the Examiner is available on Monday-Thursday 0730-1730.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, KAKALI CHAKI can be reached on 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.L.S./
Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122