Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Regarding 35 U.S.C. 103 Rejection
Regarding applicant's arguments directed at the amended claim limitation:
Examiner agrees that Orlando does not disclose the entirety of the amended claim language.
Examiner notes that Wollny does in fact teach the amended claim limitations:
wherein the dataset is maintained in a centralized (0048; support servers may be centrally located at one location or may be distributed across a plurality of sites for improved reliability, reduced latency, or data residency requirements.) device component repository (database) that tracks (event logs), for each of the IoT devices (of all the IoT devices), ([0031] At block 152, the one or more servers collect operating data from a plurality of computing devices via a network. The operating data may comprise event logs and other device status data, such as hardware profiles or lists of adjustable operating parameters, which may be received in one or more messages from the computing devices. As described above, part or all of the operating data may be received from a plurality of data collection agents installed in the computing devices. In some embodiments, a database of computing device profiles may be accessed to obtain device profiles (e.g., hardware profiles, software profiles, or license profiles). 0032; In some embodiments, one or more predictive model may be trained for each of a plurality of error conditions or for each of a plurality of device types (e.g., desktop computers, notebook computers, tablets, smartphones, network devices, or IoT devices). [0043] The operating data received by the data collection interface 210 may be sent via a data pipeline 212 to a data warehouse 214 for storage. The operating data may be stored in one or more data stores (e.g., SQL or NoSQL databases) of the data warehouse 214 for further analysis by an analysis and research interface 216. Part or all of the received operating data may additionally be provided directly to the analysis and research interface 216 in order to provide current analytics, such as providing dashboards 220 of current operating status data to service users 226 (e.g., network or device analysts monitoring real-time system performance on an ongoing basis). 
The analysis and research interface 216 may also obtain operating data from the data warehouse 214, which may be analyzed to perform aspects of the proactive support techniques described herein (e.g., training machine learning models, identifying predictive indicators, generating corrective scripts, or identifying affected computing devices). To perform such proactive support actions, the analysis and research interface 216 may be implemented by one or more proactive support servers (e.g., the proactive support servers 340 of FIG. 3).)
corresponding configuration data that includes: corresponding operating system (OS) information, (mapping above + 0016; Such operating parameters may be associated with hardware or software configurations, applications, settings, operating systems, or versions. Additional, fewer, or alternative aspects may be included in various embodiments, as described herein; 0037; In some embodiments, the affected computing devices may be identified as computing devices running software associated with an error condition (e.g., an operating system or application for which a conditional or unconditional corrective script exists), computing devices having hardware associated with an error condition, or computing devices associated with user accounts identified as having current or potential future error conditions. See also 0042; 0044)
corresponding software version information, (mapping above + 0016; 0034; 0020; Such data collection agents may further be configured to periodically record system operating parameters, such as configuration parameters, settings selected, versions of software installed or running, resource utilization levels, connected devices, or other similar information regarding the current status of the computing device. 0034; The predictive indicators may include device status indicators, such as hardware or software features, configurations, parameters, versions, or settings. Additionally or alternatively, the predictive indicators may include operating data patterns, such as combinations of entries in event logs received from data collection agents of computing devices associated with one or more error conditions.)
and corresponding patch information, (Event log information includes the corrective script information and successful or unsuccessful execution; [0028] At block 118, in some embodiments, the computing device may generate and send a confirmation message to the server to indicate either successful or unsuccessful execution of the corrective script. The confirmation message may include a log file detailing changes made to the computing device and any errors encountered. Additionally or alternatively, the changes implemented by the corrective script may be recorded in the event logs of the one or more data collection agents. If the corrective script is a conditional corrective script, the confirmation message may provide information regarding the trigger condition detected prior to execution of the corrective script. Monitoring of the computing device continues at block 104 during continued operation of the computing device.)
and wherein the centralized device component repository maintains current versioning of the configuration data such that, ([0020] At block 102, one or more data collection agents are installed on the computing device. Each data collection agent is configured to monitor and record information regarding operation of the computing device, such as system and application operations performed or attempted. These operations may be recorded in an event log as events occurring at the computing device, such as user log-in events, application launch events, application close events, error code events, memory access events, network access events, input events, output events, etc. Such data collection agents may further be configured to periodically record system operating parameters, such as configuration parameters, settings selected, versions of software installed or running, resource utilization levels, connected devices, or other similar information regarding the current status of the computing device. [0017] FIGS. 1A-B illustrate flow diagrams of exemplary methods for monitoring operation of computing devices (e.g., endpoint computing devices associated with end users), identifying predictive indicators of error conditions that may affect some of the computing devices, and proactively remediating the operation of the computing devices to avoid the error conditions. The exemplary methods may be implemented in whole or part by processors of computing devices in coordination with processors of remote servers. Thus, each of a plurality of computing devices may implement the exemplary method of FIG. 1A to provide operating data regarding such devices to one or more servers implementing aspects of the exemplary method of FIG. 1B to identify and correct potential issues with at least some of the computing devices. These exemplary methods may be implemented by components of the exemplary computing system 300 illustrated in FIG. 3, described in further detail below. 
Further embodiments may include additional or alterative actions and may involve alternative configurations or components.)
after any remedial actions are performed on the IoT devices, (after running a corrective script and logging that information in the event logs along with any changes, see 0027; 0028) the dataset in the centralized device component repository is correspondingly updated; ([0020] At block 102, one or more data collection agents are installed on the computing device. Each data collection agent is configured to monitor and record information regarding operation of the computing device, such as system and application operations performed or attempted. These operations may be recorded in an event log as events occurring at the computing device, such as user log-in events, application launch events, application close events, error code events, memory access events, network access events, input events, output events, etc. Such data collection agents may further be configured to periodically record system operating parameters, such as configuration parameters, settings selected, versions of software installed or running, resource utilization levels, connected devices, or other similar information regarding the current status of the computing device. [0021] At block 104, the one or more data collection agents execute on the computing device to monitor the operating status of the computing device. Monitoring the operating status includes detecting events occurring at the computing device, as well as periodically or episodically detecting operating parameters of the computing device. The one or more data collection agents run in the background on the client computing device to obtain such operating data, which may be stored in temporary files or added to one or more log files. [0017] FIGS. 
1A-B illustrate flow diagrams of exemplary methods for monitoring operation of computing devices (e.g., endpoint computing devices associated with end users), identifying predictive indicators of error conditions that may affect some of the computing devices, and proactively remediating the operation of the computing devices to avoid the error conditions. The exemplary methods may be implemented in whole or part by processors of computing devices in coordination with processors of remote servers. Thus, each of a plurality of computing devices may implement the exemplary method of FIG. 1A to provide operating data regarding such devices to one or more servers implementing aspects of the exemplary method of FIG. 1B to identify and correct potential issues with at least some of the computing devices. These exemplary methods may be implemented by components of the exemplary computing system 300 illustrated in FIG. 3, described in further detail below. Further embodiments may include additional or alterative actions and may involve alternative configurations or components. [0032] At block 154, the one or more servers generate a device status data set based upon the collected operating data. The device status data set is generated to comprise a plurality of entries for each of the plurality of computing devices, each of which entries may store the collected operating data or derived values generated from the operating data. In some embodiments, additional entries may be added to the device status data based upon information from other data sources, such as information relating to device support actions (e.g., error reports or support tickets from traditional device support management and reporting systems). Generating the device status data set may include combining operating data from multiple data sources, which may be updated at different times. 
The device status data set may further be generated over time by adding data received at different times, such as by combining periodic updates associated with separate time intervals from data collection agents of the plurality of computing devices. By generating a device status data set based upon operating data from a sufficiently large number of computing devices over a sufficiently long time interval, the device status data set will include both normal operating entries for each of the plurality of computing devices and error condition entries for at least some of the computing devices. Information regarding remediation actions previously taken and results of such actions may also be included, either from the operating data or from additional data sources associated with the computing devices (e.g., device support records).)
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, the claim recites “a group of target variables” and then recites “the target variables”, which lacks antecedent basis in the claim and should read “the group of target variables.”
Claims 2, 5, 10, 11, 12, 15, and 20 inherit the same rejection as claim 1 above.
Regarding claims 3-4, 9-10, 13-14, and 19-20, each recites "the model," which lacks antecedent basis in the claims.
Regarding claims 3 and 13, the claims recite "the predicted target values," which lacks antecedent basis in the claims.
Regarding claim 2, the claim recites "multiple parallel branches" and then recites "the branches," which lacks antecedent basis in the claim.
Claims 10, 12, and 20 inherit the same rejection as claim 2 above for reciting similar limitations.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 5, 7-9, 11, 15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Orlando et al. (US 20230394140 A1), hereinafter Orlando, in view of Wollny et al. (US 20230161662 A1), hereinafter Wollny.
Regarding claim 1, Orlando teaches a method, comprising: (0026; method)
pre-processing a dataset, (Fig 8; 0085; a new observation is detected, examiner notes the observation is processed to include pieces of information such as threat, elapsed time, and expertise and is not raw data and therefore interpreted to be “pre-processing”) wherein the dataset includes data and metadata (0085; observation data; 0072; the observation data includes feature data such as device type described in the applicant's specification as metadata) that indicates a software configuration (0085-0087; encryption level, descriptor, IDE implementation; see also below discussion on firmware configuration) of an internet of things (IoT) device, (The system organizes data into feature sets such as those in Figs 7 and 8 which provide some indication about the IoT device architecture type and firmware configurations; 0018; device can be an IoT device; Fig 7 shows a CXL threat, an extension threat; [0056] As shown in FIG. 6A, and by reference number 610, the security threat analysis system 604 may receive security threat information associated with a device that is to be analyzed for security threats. [0057] An asset security threat category or asset-based threat may include a security threat to hardware or software assets relating to an object of interest to an attack, such as data exfiltration or tampering of confidential data (e.g., stored in connection with a CXL device), intellectual property theft, or denial of service, among other examples.; Examples of asset-based threats may include security threats relating to firmware confidentiality and/or firmware integrity; In other words, if a CXL device is based on a previous architecture that was subject to one or more security threats, the one or more security threats may be architecture re-use threats for the CXL device.; [0052] The device 500 may compute a measure of FSD 508, which may be an open firmware component. 
For example, the device 500 may hash a set of open images of FSD 508 (e.g., code, data, or configuration information stored in or associated with FSD 508, or a portion of that code, data, or configuration information) to determine a measure of FSD 508.; Furthermore another interpretation is [0072] the machine learning system may pre-process and/or perform dimensionality reduction to reduce the feature set and/or combine features of the feature set to a minimum feature set.))
and further indicates a history of any performance issues ([0070] As shown by reference number 705, a machine learning model may be trained using a set of observations. The set of observations may be obtained and/or input from training data (e.g., historical data), such as data gathered during one or more processes described herein.)
and security issues experienced by the IoT device; (0086; prior observations; 0059; For example, the security threat analysis system 604 may use information regarding previous security threats to other devices to identify application domain security threats, architecture re-use security threats, asset-based security threats, or state-of-the-art attack based security threats for a CXL device or a computing environment that includes a CXL device. 0063; For example, the security threat analysis system 604 may use a machine learning model trained on data regarding security threats, classifications and scores for security threats, damage done by prior security threats, or an effect of mitigation actions applied to prior security threats, among other examples, to select one or more mitigation actions to implement for a device under analysis)
after the dataset is pre-processed, providing the dataset (observations) as an input to a machine learning model; ([0086] As shown by reference number 810, the machine learning system may receive a new observation (or a set of new observations), and may input the new observation to the machine learning model 805. As shown, the new observation may include a first feature of type of threat, a second feature of an amount of elapsed time, a third feature of a level of expertise, and so on, as an example.)
using the machine learning model to generate, based on the input, (machine learning system inputs observations and outputs predicted values for the target variables) respective target variable value predictions for each target variable in a group of target variables, (Fig 7; 715 target variable; Fig 8; 815 target variable prediction) wherein a first one of the target variables corresponds to the software configuration, (encryption level configuration, secure CXL IDE, adjusting a descriptor) and a second one of the target variables corresponds to the history (all the observations are part of the history and therefore all predicted values correspond to the history); (For each new observation the system outputs a predicted threat value for each observation, as seen in 0087 one predicted value can be for implementing a secure CXL IDE which can be for the software configuration, another is updating a descriptor, or increasing encryption, [0086] As shown by reference number 810, the machine learning system may receive a new observation (or a set of new observations); [0087] In some implementations, the trained machine learning model 805 may predict a value of 12 (e.g., moderately critical) for the target variable of an attack potential for the new observation, as shown by reference number 815. Based on this prediction (e.g., based on the value having a particular label or classification or based on the value satisfying or failing to satisfy a threshold), the machine learning system may provide a recommendation and/or output for determination of a recommendation, such as to implement a secure CXL IDE. Additionally, or alternatively, the machine learning system may perform an automated action and/or may cause an automated action to be performed (e.g., by instructing another device to perform the automated action), such as updating a design specification for a CXL device to include the secure CXL IDE as a design specification for the CXL device. 
As another example, if the machine learning system were to predict a value of 18 for the target variable of 24 (e.g., very non-critical), then the machine learning system may provide a different recommendation (e.g., to update a design document to identify the security threat as infeasible or not to be mitigated at present) and/or may perform or cause performance of a different automated action (e.g., increasing an encryption level). In some implementations, the recommendation and/or the automated action may be based on the target variable value having a particular label (e.g., classification or categorization) and/or may be based on whether the target variable value satisfies one or more thresholds (e.g., whether the target variable value is greater than a threshold, is less than a threshold, is equal to a threshold, or falls within a range of threshold values).
and when the target variable value predictions (predicted values) indicate a potential security issue (when the target value satisfies a threshold indicating a security issue) and/or a potential performance issue, (See claim interpretation section for “or” limitations) with the IoT device, (see mapping above) taking a remedial action to resolve the potential security issue (increasing encryption level) and/or the potential performance issue. (see claim interpretation section on “or” limitations) (0087; As another example, if the machine learning system were to predict a value of 18 for the target variable of 24 (e.g., very non-critical), then the machine learning system may provide a different recommendation (e.g., to update a design document to identify the security threat as infeasible or not to be mitigated at present) and/or may perform or cause performance of a different automated action (e.g., increasing an encryption level). In some implementations, the recommendation and/or the automated action may be based on the target variable value having a particular label (e.g., classification or categorization) and/or may be based on whether the target variable value satisfies one or more thresholds (e.g., whether the target variable value is greater than a threshold, is less than a threshold, is equal to a threshold, or falls within a range of threshold values))
Orlando does not explicitly teach the following underlined limitations:
pre-processing a dataset, wherein the dataset includes data and metadata that indicates a software configuration of each of a plurality of internet of things (IoT) devices, and further indicates a history of any performance issues and security issues experienced by each of the IoT devices; wherein the dataset is maintained in a centralized device component repository that tracks, for each of the IoT devices, corresponding configuration data that includes: corresponding operating system (OS) information, corresponding software version information, and corresponding patch information, and wherein the centralized device component repository maintains current versioning of the configuration data such that, after any remedial actions are performed on the IoT devices, the dataset in the centralized device component repository is correspondingly updated;
taking a remedial action to resolve the potential security issue and the potential performance issue based on a combination of the software configuration and any patches thereto used in the one IoT device.
In an analogous art, Wollny teaches:
pre-processing a dataset, (0005; 0019-0023; collecting operating data of a plurality of computing devices) wherein the dataset includes data (0020; collecting a range of operating data) and metadata (0023; operating data includes a hardware profile of the device) that indicates a software configuration (0020; the data collection agents also collect software configurations of each computing device) of each of a plurality of internet of things (IoT) devices, (0019-0023; data is collected for a plurality of computing devices; 0033; where the device can be an IoT device) and further indicates a history of any performance issues (0029-0032; operating errors) and security issues (0008; errors include security vulnerabilities; 0034; Some error conditions may be selected or targeted as known error conditions, vulnerabilities, exploits, or bugs through third party data sources 314) experienced by each of the IoT devices; (see mapping above indicating this process is performed for a plurality of computing devices) (0029-0032; In some embodiments, additional entries may be added to the device status data based upon information from other data sources, such as information relating to device support actions (e.g., error reports or support tickets from traditional device support management and reporting systems). Generating the device status data set may include combining operating data from multiple data sources, which may be updated at different times. The device status data set may further be generated over time by adding data received at different times, such as by combining periodic updates associated with separate time intervals from data collection agents of the plurality of computing devices. 
By generating a device status data set based upon operating data from a sufficiently large number of computing devices over a sufficiently long time interval, the device status data set will include both normal operating entries for each of the plurality of computing devices and error condition entries for at least some of the computing devices. Information regarding remediation actions previously taken and results of such actions may also be included, either from the operating data or from additional data sources associated with the computing devices (e.g., device support records).; When populated with previously formulated solutions, the knowledge base may be used for automating aspects of error condition remediation. Thus, the method may further comprise determining that a preventive solution to an identified error condition is identified in the knowledge base as a known solution based upon previous analysis. Based upon such determination, the corrective script may be selected from the knowledge base based upon the error condition and the one or more predictive indicators)
wherein the dataset is maintained in a centralized (0048; support servers may be centrally located at one location or may be distributed across a plurality of sites for improved reliability, reduced latency, or data residency requirements.) device component repository (database) that tracks (event logs), for each of the IoT devices (of all the IoT devices), (0032; In some embodiments, one or more predictive model may be trained for each of a plurality of error conditions or for each of a plurality of device types (e.g., desktop computers, notebook computers, tablets, smartphones, network devices, or IoT devices). [0031] At block 152, the one or more servers collect operating data from a plurality of computing devices via a network. The operating data may comprise event logs and other device status data, such as hardware profiles or lists of adjustable operating parameters, which may be received in one or more messages from the computing devices. As described above, part or all of the operating data may be received from a plurality of data collection agents installed in the computing devices. In some embodiments, a database of computing device profiles may be accessed to obtain device profiles (e.g., hardware profiles, software profiles, or license profiles). [0043] The operating data received by the data collection interface 210 may be sent via a data pipeline 212 to a data warehouse 214 for storage. The operating data may be stored in one or more data stores (e.g., SQL or NoSQL databases) of the data warehouse 214 for further analysis by an analysis and research interface 216. Part or all of the received operating data may additionally be provided directly to the analysis and research interface 216 in order to provide current analytics, such as providing dashboards 220 of current operating status data to service users 226 (e.g., network or device analysts monitoring real-time system performance on an ongoing basis). 
The analysis and research interface 216 may also obtain operating data from the data warehouse 214, which may be analyzed to perform aspects of the proactive support techniques described herein (e.g., training machine learning models, identifying predictive indicators, generating corrective scripts, or identifying affected computing devices). To perform such proactive support actions, the analysis and research interface 216 may be implemented by one or more proactive support servers (e.g., the proactive support servers 340 of FIG. 3).)
corresponding configuration data that includes:
corresponding operating system (OS) information, (mapping above + 0016; Such operating parameters may be associated with hardware or software configurations, applications, settings, operating systems, or versions. Additional, fewer, or alternative aspects may be included in various embodiments, as described herein; 0037; In some embodiments, the affected computing devices may be identified as computing devices running software associated with an error condition (e.g., an operating system or application for which a conditional or unconditional corrective script exists), computing devices having hardware associated with an error condition, or computing devices associated with user accounts identified as having current or potential future error conditions. See also 0042; 0044)
corresponding software version information, (mapping above + 0016; 0034; 0020; Such data collection agents may further be configured to periodically record system operating parameters, such as configuration parameters, settings selected, versions of software installed or running, resource utilization levels, connected devices, or other similar information regarding the current status of the computing device. 0034; The predictive indicators may include device status indicators, such as hardware or software features, configurations, parameters, versions, or settings. Additionally or alternatively, the predictive indicators may include operating data patterns, such as combinations of entries in event logs received from data collection agents of computing devices associated with one or more error conditions.)
and corresponding patch information, (Event log information includes the corrective script information and successful or unsuccessful execution; [0028] At block 118, in some embodiments, the computing device may generate and send a confirmation message to the server to indicate either successful or unsuccessful execution of the corrective script. The confirmation message may include a log file detailing changes made to the computing device and any errors encountered. Additionally or alternatively, the changes implemented by the corrective script may be recorded in the event logs of the one or more data collection agents. If the corrective script is a conditional corrective script, the confirmation message may provide information regarding the trigger condition detected prior to execution of the corrective script. Monitoring of the computing device continues at block 104 during continued operation of the computing device.)
and wherein the centralized device component repository maintains current versioning of the configuration data such that, ([0020] At block 102, one or more data collection agents are installed on the computing device. Each data collection agent is configured to monitor and record information regarding operation of the computing device, such as system and application operations performed or attempted. These operations may be recorded in an event log as events occurring at the computing device, such as user log-in events, application launch events, application close events, error code events, memory access events, network access events, input events, output events, etc. Such data collection agents may further be configured to periodically record system operating parameters, such as configuration parameters, settings selected, versions of software installed or running, resource utilization levels, connected devices, or other similar information regarding the current status of the computing device. [0017] FIGS. 1A-B illustrate flow diagrams of exemplary methods for monitoring operation of computing devices (e.g., endpoint computing devices associated with end users), identifying predictive indicators of error conditions that may affect some of the computing devices, and proactively remediating the operation of the computing devices to avoid the error conditions. The exemplary methods may be implemented in whole or part by processors of computing devices in coordination with processors of remote servers. Thus, each of a plurality of computing devices may implement the exemplary method of FIG. 1A to provide operating data regarding such devices to one or more servers implementing aspects of the exemplary method of FIG. 1B to identify and correct potential issues with at least some of the computing devices. These exemplary methods may be implemented by components of the exemplary computing system 300 illustrated in FIG. 3, described in further detail below. 
Further embodiments may include additional or alternative actions and may involve alternative configurations or components.)
after any remedial actions are performed on the IoT devices, (after running a corrective script and logging that information in the event logs along with any changes, see 0027; 0028) the dataset in the centralized device component repository is correspondingly updated; ([0020] At block 102, one or more data collection agents are installed on the computing device. Each data collection agent is configured to monitor and record information regarding operation of the computing device, such as system and application operations performed or attempted. These operations may be recorded in an event log as events occurring at the computing device, such as user log-in events, application launch events, application close events, error code events, memory access events, network access events, input events, output events, etc. Such data collection agents may further be configured to periodically record system operating parameters, such as configuration parameters, settings selected, versions of software installed or running, resource utilization levels, connected devices, or other similar information regarding the current status of the computing device. [0021] At block 104, the one or more data collection agents execute on the computing device to monitor the operating status of the computing device. Monitoring the operating status includes detecting events occurring at the computing device, as well as periodically or episodically detecting operating parameters of the computing device. The one or more data collection agents run in the background on the client computing device to obtain such operating data, which may be stored in temporary files or added to one or more log files. [0017] FIGS.
1A-B illustrate flow diagrams of exemplary methods for monitoring operation of computing devices (e.g., endpoint computing devices associated with end users), identifying predictive indicators of error conditions that may affect some of the computing devices, and proactively remediating the operation of the computing devices to avoid the error conditions. The exemplary methods may be implemented in whole or part by processors of computing devices in coordination with processors of remote servers. Thus, each of a plurality of computing devices may implement the exemplary method of FIG. 1A to provide operating data regarding such devices to one or more servers implementing aspects of the exemplary method of FIG. 1B to identify and correct potential issues with at least some of the computing devices. These exemplary methods may be implemented by components of the exemplary computing system 300 illustrated in FIG. 3, described in further detail below. Further embodiments may include additional or alternative actions and may involve alternative configurations or components. [0032] At block 154, the one or more servers generate a device status data set based upon the collected operating data. The device status data set is generated to comprise a plurality of entries for each of the plurality of computing devices, each of which entries may store the collected operating data or derived values generated from the operating data. In some embodiments, additional entries may be added to the device status data based upon information from other data sources, such as information relating to device support actions (e.g., error reports or support tickets from traditional device support management and reporting systems). Generating the device status data set may include combining operating data from multiple data sources, which may be updated at different times.
The device status data set may further be generated over time by adding data received at different times, such as by combining periodic updates associated with separate time intervals from data collection agents of the plurality of computing devices. By generating a device status data set based upon operating data from a sufficiently large number of computing devices over a sufficiently long time interval, the device status data set will include both normal operating entries for each of the plurality of computing devices and error condition entries for at least some of the computing devices. Information regarding remediation actions previously taken and results of such actions may also be included, either from the operating data or from additional data sources associated with the computing devices (e.g., device support records).)
taking a remedial action to resolve (0016; In some embodiments, the affected computing devices may then be proactively adjusted or updated by running corrective scripts prior to the appearance of user-observable symptoms of the error conditions. In other embodiments, the affected computing devices may be provisioned with corrective scripts configured to be automatically run upon detection of a trigger condition associated with an error condition to remediate the error condition; 0018; The monitoring method 100 may be implemented at each of a plurality of computing devices in order to facilitate further analysis and remediation of error conditions. See also 0009)
the potential security issue (0008; errors include security vulnerabilities; 0034; Some error conditions may be selected or targeted as known error conditions, vulnerabilities, exploits, or bugs through third party data sources 314; 0016-0017; 0034; 0037; potential errors) and the potential performance issue (0029-0032; operating errors) based on a combination of the software configuration (adjusting configuration parameters) and any patches (software patches) thereto used in the one IoT device (see mapping above for IoT devices) (0035; In further embodiments, the corrective scripts may be generated based upon information relating to known solutions to error conditions (e.g., software patches) retrieved from third party resource repositories (e.g., third party data sources 314). In yet further embodiments, components of the corrective scripts may be retrieved from a knowledge base storing previously determined solutions for known error conditions, which may be combined into a corrective script. For example, a corrective script may be generated to adjust settings or configuration options of hardware or software components of affected computing devices to reduce or eliminate the probability of an error condition at such computing devices. 0044; The various reports 222 may be generated for service users 226 for use in analyzing and correcting error conditions, as well as for product owners 228 for use in patching or avoiding problems with particular software or hardware products. Additionally, the analysis and research interface 216 may generate, select, or receive corrective scripts 218 to be sent to the computing device 202, as described elsewhere herein. See 0027 discussing adjusting operating parameters)
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to modify the teachings of Orlando to include determining configuration and patch adjustments for a plurality of devices as is taught by Wollny.
The suggestion/motivation for doing so is to support and maintain endpoint devices using proactive analytics [0001].
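For illustration only, the centralized device component repository behavior mapped above (a per-device record of OS, software-version, and patch information that is correspondingly updated after each remedial action) can be sketched as follows. All class, field, and device names below are hypothetical and appear in neither reference.

```python
# Minimal sketch of a centralized device component repository that tracks,
# for each IoT device, its OS, software version, patch history, and event
# log, and is updated after any remedial action. Names are illustrative.

class DeviceRepository:
    def __init__(self):
        self._devices = {}  # device_id -> configuration record

    def register(self, device_id, os_info, sw_version):
        self._devices[device_id] = {
            "os": os_info,
            "sw_version": sw_version,
            "patches": [],    # patches applied to this device
            "event_log": [],  # remediation events, per the event-log mapping
        }

    def apply_remediation(self, device_id, patch_id, new_sw_version=None):
        # After a corrective script runs on the device, the repository's
        # record is correspondingly updated (patch list, version, log).
        rec = self._devices[device_id]
        rec["patches"].append(patch_id)
        if new_sw_version is not None:
            rec["sw_version"] = new_sw_version
        rec["event_log"].append(("remediated", patch_id))

    def current_config(self, device_id):
        # Returns the current versioning of the configuration data.
        return self._devices[device_id]

repo = DeviceRepository()
repo.register("iot-01", os_info="rtos-2.1", sw_version="1.0.0")
repo.apply_remediation("iot-01", patch_id="patch-0001", new_sw_version="1.0.1")
```

The point of the sketch is only that the repository, not the device, remains the system of record: the remediation call mutates the centralized entry, so a later `current_config` read reflects the post-remediation state.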
Regarding claim 5, Orlando in view of Wollny teach the method as recited in claim 1, as disclosed above, and Orlando further teaches wherein the pre-processing comprises separating the target variables (reducing the variables in the set) from other elements of the dataset (0072; under another interpretation, the machine learning system may pre-process and/or perform dimensionality reduction to reduce the feature set and/or combine features of the feature set to a minimum feature set.)
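For illustration only, the pre-processing step addressed in claim 5, separating the target variables from the other elements of the dataset before training, can be sketched as below; the column names are hypothetical and are not drawn from either reference.

```python
# Illustrative pre-processing: split each dataset row into feature columns
# and target-variable columns. Column names here are hypothetical.

def split_targets(rows, target_keys=("security_label", "performance_label")):
    features, targets = [], []
    for row in rows:
        targets.append({k: row[k] for k in target_keys})
        features.append({k: v for k, v in row.items() if k not in target_keys})
    return features, targets

features, targets = split_targets([
    {"os": "rtos-2.1", "sw_version": "1.0.0",
     "security_label": 1, "performance_label": 0},
])
```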
Regarding claim 7, Orlando in view of Wollny teach the method as recited in claim 1, as disclosed above, and Orlando further teaches wherein an alert is generated and transmitted (generate and output) when the target variable value predictions indicate a potential security issue (predicted value exceeds threshold) and a potential performance issue (see claim interpretation section on “or” limitations) (Mapping claim 1 + [0066] In some implementations, the security threat analysis system 604 may document a set of security threats. For example, the security threat analysis system 604 may output information or an alert identifying the set of security threats; 0087; then the machine learning system may provide a different recommendation (e.g., to update a design document to identify the security threat as infeasible or not to be mitigated at present) and/or may perform or cause performance of a different automated action (e.g., increasing an encryption level). In some implementations, the recommendation and/or the automated action may be based on the target variable value having a particular label (e.g., classification or categorization) and/or may be based on whether the target variable value satisfies one or more thresholds (e.g., whether the target variable value is greater than a threshold, is less than a threshold, is equal to a threshold, or falls within a range of threshold values))
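For illustration only, the alerting logic addressed in claim 7, generating an alert when the predicted target variable values exceed thresholds for both a potential security issue and a potential performance issue, can be sketched as follows. The threshold values and field names are hypothetical.

```python
# Illustrative threshold-based alerting on predicted target variable values.
# An alert is produced only when BOTH predictions exceed their thresholds.

SECURITY_THRESHOLD = 0.8    # hypothetical threshold values
PERFORMANCE_THRESHOLD = 0.7

def maybe_alert(predictions):
    """predictions: dict of predicted scores, one per target variable."""
    security_risk = predictions["security"] > SECURITY_THRESHOLD
    performance_risk = predictions["performance"] > PERFORMANCE_THRESHOLD
    if security_risk and performance_risk:
        return {"alert": True,
                "reason": "security and performance thresholds exceeded"}
    return {"alert": False, "reason": None}

alert = maybe_alert({"security": 0.91, "performance": 0.75})
```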
Regarding claim 8, Orlando in view of Wollny teach the method as recited in claim 1, as disclosed above, and Orlando further teaches wherein the software configuration comprises any one or more of an operating system (OS) of each IoT device, version information about software on each IoT device, and a combination of software on each IoT device (Mapping claim 1 + 0087; such as updating a design specification for a CXL device to include the secure CXL IDE; increasing an encryption level; 0088; In this way, the machine learning system may apply a rigorous and automated process to identifying, classifying, and mitigating security threats, such as for CXL devices. See also 0057 discussion on software assets and threats)
Regarding claim 9, Orlando in view of Wollny teach the method as recited in claim 1, as disclosed above, and Orlando further teaches wherein the model comprises a deep neural network comprising an input layer with multiple neurons, and each of the neurons corresponds to a respective influencing variable. (0084; deep neural network; examiner notes that all neural networks include an input layer, that an input layer inherently comprises multiple neurons each performing a calculation on some variable, and that since a neuron performs the calculation, it is equivalent to “influencing”)
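For illustration only, the claim 9 arrangement, an input layer in which each neuron corresponds to one influencing variable, can be sketched as a single dense layer; the variable names and weights are hypothetical and appear in neither reference.

```python
import numpy as np

# Illustrative input layer: one input neuron per influencing variable,
# here hypothetical device attributes encoded as numeric features.

influencing_variables = ["os_version", "sw_version", "patch_level"]

# One weight per input neuron; a single dense layer as the simplest case.
weights = np.array([0.2, 0.5, 0.3])

def input_layer(features):
    # Each element of `features` feeds exactly one input neuron.
    assert len(features) == len(influencing_variables)
    return float(np.dot(features, weights))

score = input_layer(np.array([1.0, 0.0, 1.0]))  # 0.2*1 + 0.5*0 + 0.3*1
```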
Regarding claim 11, the claim inherits the rejection of claim 1 for reciting similar limitations in the form of a non-transitory medium claim (0026; non-transitory medium storing instructions executed by hardware)
Regarding claim 15, the claim inherits the rejection of claim 5 for reciting similar limitations in the form of a non-transitory medium claim (0026; non-transitory medium storing instructions executed by hardware)
Regarding claim 17, the claim inherits the rejection of claim 7 for reciting similar limitations in the form of a non-transitory medium claim (0026; non-transitory medium storing instructions executed by hardware)
Regarding claim 18, the claim inherits the rejection of claim 8 for reciting similar limitations in the form of a non-transitory medium claim (0026; non-transitory medium storing instructions executed by hardware)
Regarding claim 19, the claim inherits the rejection of claim 9 for reciting similar limitations in the form of a non-transitory medium claim (0026; non-transitory medium storing instructions executed by hardware)
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Orlando et al. (US 20230394140 A1) hereinafter Orlando, in view of Wollny et al. (US 20230161662 A1) hereinafter Wollny, in view of Merchant et al. (US 11415425 B1) hereinafter Merchant.
Regarding claim 6, Orlando in view of Wollny teach the method as recited in claim 1, as disclosed above; Orlando in view of Wollny do not explicitly teach wherein each IoT device comprises an internet of medical things (IoMT) device.
In an analogous art, Merchant teaches wherein each IoT device comprises an internet of medical things (IoMT) device. (Col 18 Lines 7-55; example devices for monitoring and detecting anomalous behavior include Internet of Medical Things (IoMT) devices)
It would have been obvious to one of ordinary skill in the art prior to the effective filing date to modify the teachings of Orlando in view of Wollny to include IoT devices such as IoMT devices as is taught by Merchant.
The suggestion/motivation is to improve detection of anomalous activity [Background].
Regarding claim 16, the claim inherits the rejection of claim 6 for reciting similar limitations in the form of a non-transitory medium claim (0026; non-transitory medium storing instructions executed by hardware)
Claims 2, 10, 12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Orlando et al. (US 20230394140 A1) hereinafter Orlando, in view of Wollny et al. (US 20230161662 A1) hereinafter Wollny, in view of Carreira et al. (WO 2019134987 A1) hereinafter Carreira.
Regarding claim 2, Orlando in view of Wollny teach the method as recited in claim 1, as disclosed above; Orlando in view of Wollny do not disclose wherein the machine learning model comprises a multi-output neural network that includes multiple parallel branches, and each of the branches corresponds to a respective one of the target variables.
In an analogous art, Carreira teaches wherein the machine learning model comprises a multi-output neural network that includes multiple parallel branches, and each of the branches corresponds to a respective one of the target variables. (Pg. 11 ¶ 7 through Pg. 12 ¶ 1; For example, some or all of the layer blocks can be configured as Inception blocks, which include multiple parallel branches of neural network layers, e.g., branches of convolutional layers with various filter sizes and a branch with a max pooling layer, whose outputs are concatenated to generate the output of the Inception block.)
It would have been obvious to one of ordinary skill in the art prior to the effective filing date to modify the teachings of Orlando in view of Wollny to include a multi-output neural network with parallel branches providing an output based on the processing, as is taught by Carreira.
The suggestion/motivation is to improve processing using neural networks [Background].
Regarding claim 10, Orlando in view of Wollny teach the method as recited in claim 1, as disclosed above, and Orlando further teaches wherein the model comprises a deep neural network comprising multiple hidden layers (0084; deep neural network; examiner notes that multiple hidden layers are what make a neural network a “deep” neural network).
Orlando in view of Wollny do not explicitly teach as well as two parallel output branches that communicate with the multiple hidden layers, and each of the output branches corresponds to a respective one of the target variables.
In an analogous art, Carreira teaches as well as two parallel output branches that communicate with the multiple hidden layers, and each of the output branches (parallel branches that output data) corresponds to a respective one of the target variables (concatenated outputs) (Pg. 11 ¶ 7 through Pg. 12 ¶ 1; For example, some or all of the layer blocks can be configured as Inception blocks, which include multiple parallel branches of neural network layers, e.g., branches of convolutional layers with various filter sizes and a branch with a max pooling layer, whose outputs are concatenated to generate the output of the Inception block.)
It would have been obvious to one of ordinary skill in the art prior to the effective filing date to modify the teachings of Orlando in view of Wollny to include a multi-output neural network with parallel branches providing an output based on the processing, as is taught by Carreira.
The suggestion/motivation is to improve processing using neural networks [Background].
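For illustration only, the claim 10 arrangement, shared hidden layers feeding two parallel output branches with one branch per target variable, can be sketched as below. The layer sizes, random weights, and output names are hypothetical and are not drawn from Orlando, Wollny, or Carreira.

```python
import numpy as np

# Illustrative multi-output network: a shared trunk of hidden layers whose
# representation feeds two parallel output branches, one per target variable.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Shared trunk: 4 input features -> two hidden layers of width 8.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 8))
# Two parallel output branches reading the same hidden representation.
W_security = rng.normal(size=8)
W_performance = rng.normal(size=8)

def forward(x):
    h = relu(relu(x @ W1) @ W2)  # shared hidden layers
    return {
        "security": float(h @ W_security),        # branch 1 -> target 1
        "performance": float(h @ W_performance),  # branch 2 -> target 2
    }

out = forward(np.ones(4))
```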
Regarding claim 12, the claim inherits the rejection of claim 2 for reciting similar limitations in the form of a non-transitory medium claim (0026; non-transitory medium storing instructions executed by hardware)
Regarding claim 20, the claim inherits the rejection of claim 10 for reciting similar limitations in the form of a non-transitory medium claim (0026; non-transitory medium storing instructions executed by hardware)
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Orlando et al. (US 20230394140 A1) hereinafter Orlando, in view of Wollny et al. (US 20230161662 A1) hereinafter Wollny, in view of Song et al. (US 20230153460 A1) hereinafter Song.
Regarding claim 3, Orlando in view of Wollny teach the method as recited in claim 1; Orlando in view of Wollny do not explicitly teach wherein the model performs a respective softmax activation to obtain each of the predicted target values.
In an analogous art, Song teaches wherein the model performs a respective softmax activation to obtain each of the predicted target values (0053; using the persona predictor model_p with softmax activation to learn pip, so as to obtain the final objective for the defender; 0040; machine learning model)
It would have been obvious to one of ordinary skill in the art prior to the effective filing date to modify the teachings of Orlando in view of Wollny to include a model performing softmax activation for learning target values, as is taught by Song.
The suggestion/motivation is to improve attack protection [0002].
Regarding claim 13, the claim inherits the rejection of claim 3 for reciting similar limitations in the form of a non-transitory medium claim (0026; non-transitory medium storing instructions executed by hardware)
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Orlando et al. (US 20230394140 A1) hereinafter Orlando, in view of Wollny et al. (US 20230161662 A1) hereinafter Wollny, in view of Nissim et al. (US 20230409715 A1) hereinafter Nissim.
Regarding claim 4, Orlando in view of Wollny teach the method as recited in claim 1, as disclosed above; Orlando in view of Wollny do not explicitly teach wherein the input is received by the model through a single input layer of the model.
In an analogous art, Nissim teaches wherein the input is received by the model through a single input layer of the model (0290; a neural network (NN) consists of a single input layer; examiner notes input layers inherently receive data as input).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date to modify the teachings of Orlando in view of Wollny to include a single input layer of a neural network to receive input, as is taught by Nissim.
The suggestion/motivation is to improve malware detection [0001].
Regarding claim 14, the claim inherits the rejection of claim 4 for reciting similar limitations in the form of a non-transitory medium claim (0026; non-transitory medium storing instructions executed by hardware)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDERRAHMEN H CHOUAT whose telephone number is (571)431-0695. The examiner can normally be reached on Mon-Fri from 9AM to 5PM PST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Parry, can be reached at telephone number 571-272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to the USPTO patent electronic filing system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via a variety of formats. See MPEP § 713.01. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/InterviewPractice.
Abderrahmen Chouat
Examiner
Art Unit 2451
/Chris Parry/Supervisory Patent Examiner, Art Unit 2451