Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office Action has been withdrawn pursuant to 37 CFR 1.114.
Claim Rejections - 35 USC § 102
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
4. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office Action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
5. Claims 1, 6-8, 10, and 29 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Non-Patent Literature “Isolation Forest” (“Liu”, previously discussed by the Examiner in the Advisory Action mailed 11/18/25 and made of record at that time).
Regarding claim 1, LIU teaches A computer-implemented (page 418’s section 5 discussing the use of CPUs and threaded jobs, i.e., clear use of computers, to perform the experimentation and evaluation on the framework’s test and inference/evaluation stages discussed per section 4) method for training a machine learning algorithm to detect at least one anomaly in at least one process (pages 417-418, sections 4 and 4.1, discussing the training stage for the framework to teach/train it to perform anomaly detection, and further the implication is that this is applicable to “various application domains” such as discussed in section 1’s first paragraph including logged credit card transaction data or logged computer network traffic data (i.e., examples of a process as recited)), the computer-implemented method comprising:
providing a training dataset of a process, the training dataset including a first number of training usage sequences, and the first number being greater than one (page 417’s sections 4 and 4.1: “training set”, which the Examiner understands to be inclusive of logged data (as discussed per section 1’s first paragraph) such that this data can be evaluated per anomaly detection, and such that this data could be understood to include instances of data as logged that are equivalent to “a first number of ... usage sequences”, which then if used to train per sections 4-4.1 then amount to “training usage sequences” as recited, and where all the data as logged and available could be understood to represent a recited “first number” of such data instances);
creating a second number of bootstrap datasets based on the training dataset (page 417’s section 4.1 discussing subsampling the training data set to create samples which the Examiner equates with the recited “bootstrap datasets”) by randomly drawing a third number of training usage sequences from the training dataset (page 416’s section 3, in its second paragraph, mentioning that “sub-sampling is conducted by random selection of instances”, and sections 4-4.1 further discuss a configurable sub-sampling size that is equivalent to how much data is drawn into each sub-sample), the second number being greater than one, and each draw of the drawing the third number being made from among the first number of training usage sequences (section 4.1 on page 418 mentions a configurable “number of tree” which, if defined, is equivalent to how many times a sub-sample is drawn from the training data set, and where per Figure 1C’s graph, the number of trees could be anywhere between 1-1000); and
training a machine learning algorithm to detect at least one anomaly in at least one process (pages 417-418’s sections 4-4.1), the training including creating a number of process trees (“number of trees”/ “ensemble size” per section 4.1 on page 418 specifically) equal to the second number (where the number of trees simply matches the number of sub-samples, and it follows that this is the most basic way to leverage any of the sub-samples that Liu contemplates, e.g. a 1-1 relationship between a sub-sample and a tree in the ensemble) using a process mining algorithm (pages 417-418’s algorithms 1-2, which the Examiner understands to be the algorithms used to transform logged process data into an ensemble model of trees), each respective process tree among the number of process trees being created based on one corresponding bootstrap dataset among the second number of bootstrap datasets (where the number of trees simply matches the number of sub-samples, and it follows that this is the most basic way to leverage any of the sub-samples that Liu contemplates, e.g. a 1-1 relationship between a sub-sample and a tree in the ensemble).
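For clarity, the one-to-one sub-sample-to-tree correspondence discussed above may be sketched as follows (an illustrative sketch only; all identifiers are the Examiner's and do not appear in Liu, and subsampling is shown without replacement per Liu's "random selection of instances"):

```python
import random

def draw_subsamples(training_set, num_trees, subsample_size):
    # One sub-sample per tree to be built, i.e., a 1-1 relationship between
    # a sub-sample and a tree in the ensemble; each sub-sample is drawn by
    # random selection of instances from the full training set.
    return [random.sample(training_set, subsample_size) for _ in range(num_trees)]

def train_ensemble(training_set, num_trees, subsample_size):
    # Each "tree" below is a stand-in for a model structure built from exactly
    # one corresponding sub-sample (here, trivially, the sorted sub-sample).
    subsamples = draw_subsamples(training_set, num_trees, subsample_size)
    return [sorted(s) for s in subsamples]

# The "first number" is the size of the training set (20 here); the "second
# number" is the number of trees/sub-samples; the "third number" is the
# configurable sub-sampling size.
ensemble = train_ensemble(list(range(20)), num_trees=10, subsample_size=4)
```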
Regarding claim 6, Liu teaches the computer-implemented method of claim 1, wherein the machine learning algorithm is a random forest algorithm comprising the number of process trees (page 416’s section 3, in its second paragraph, mentioning that “sub-sampling is conducted by random selection of instances”, where this is understood to generate the trees in the ensemble relating to underlying logged process data (as the Examiner has discussed in relation to claim 1)).
Regarding claim 7, Liu teaches the computer-implemented method of claim 1, wherein the second number lies in a range of 50 to 200 (the number of trees is graphed in Figure 1C as shown on page 414, where the number as graphed ranges from 1 to 1000, thereby inclusive of the recited range of 50-200).
Regarding claim 8, Liu teaches the computer-implemented method of claim 1, wherein the third number is equal to or less than the first number (as discussed per claim 1, subsampling as taught may involve drawing a number of data instances equal to the sub-sampling size from the training dataset (section 4.1), and it stands to reason that the number drawn in a sub-sample would be less than the whole amount (e.g., the “first number” as recited)).
Regarding claim 10, Liu teaches A data processing system, comprising at least one processor or electronic circuit for performing at least the computer-implemented method of claim 1 (section 5’s first paragraph on page 418, discussing the use of CPUs in relation to the section 4 stages for training and evaluation).
Regarding claim 29, Liu teaches the computer-implemented method of claim 1, wherein a trained machine learning algorithm including the number of process trees is configured to output a plurality of output values for classifying the process as one of normal or abnormal, the plurality of output values being output in response to input of a first usage sequence corresponding to the process (section 4 on page 417 discussing that each test instance as evaluated by the trained algorithm results in an anomaly score for the instance, and per section 5 on pages 418-419, anomalies (i.e., anomalous data instances) can be ranked according to that score, where the top n can be selected for reporting (i.e., providing a type of threshold for comparatively designating anomalies based on their scores)).
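The score-ranking and top-n reporting relied upon above can be illustrated with a minimal sketch (identifiers are illustrative and do not appear in Liu):

```python
def rank_anomalies(scores, top_n):
    # Rank instances by anomaly score, highest first, and report the top n;
    # the cutoff at n effectively serves as a comparative threshold for
    # designating which instances are anomalous.
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [instance for instance, _ in ranked[:top_n]]

# Each key is a test instance; each value is its anomaly score.
flagged = rank_anomalies({"a": 0.9, "b": 0.4, "c": 0.7}, top_n=2)
```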
Claim Rejections - 35 USC § 103
6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office Action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
7. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
8. Claims 2-3, 15, 18, and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Non-Patent Literature “evtree: Evolutionary Learning of Globally Optimal Classification and Regression Trees in R” (“Grubinger”).
Regarding claim 2, LIU teaches A computer-implemented (page 418’s section 5 discussing the use of CPUs and threaded jobs, i.e., clear use of computers, to perform the experimentation and evaluation on the framework’s test and inference/evaluation stages discussed per section 4) method for training a machine learning algorithm to detect at least one anomaly in at least one process (pages 417-418, sections 4 and 4.1, discussing the training stage for the framework to teach/train it to perform anomaly detection, and further the implication is that this is applicable to “various application domains” such as discussed in section 1’s first paragraph including logged credit card transaction data or logged computer network traffic data (i.e., examples of a process as recited)), the computer-implemented method comprising:
providing a training dataset of a process, the training dataset including a first number of training usage sequences, and the first number being greater than one (page 417’s sections 4 and 4.1: “training set”, which the Examiner understands to be inclusive of logged data (as discussed per section 1’s first paragraph) such that this data can be evaluated per anomaly detection, and such that this data could be understood to include instances of data as logged that are equivalent to “a first number of ... usage sequences”, which then if used to train per sections 4-4.1 then amount to “training usage sequences” as recited, and where all the data as logged and available could be understood to represent a recited “first number” of such data instances); and
training a machine learning algorithm to detect at least one anomaly in at least one process (pages 417-418’s sections 4-4.1), the training including creating a second number of process trees based on the training dataset (“number of trees”/ “ensemble size” per section 4.1 on page 418 specifically) using a process mining algorithm, the second number being greater than one (pages 417-418’s algorithms 1-2, which the Examiner understands to be the algorithms used to transform logged process data into an ensemble model of trees, and where the number of trees may range from 1 to 1000 as shown in Figure 1C on page 414).
Liu does not teach the creating including randomly selecting one among a plurality of split operator types for each operator node of each process tree among the second number of process trees. Rather, the Examiner relies upon GRUBINGER to teach what Liu otherwise lacks; see, e.g., Grubinger’s section 3.1 discussing “random split variable selection” in relation to node creation for process tree generation.
Both Liu and Grubinger relate to the generation of process trees in relation to a dataset. Hence, they are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the use of randomly selected split rules/operators, per Grubinger, to generate trees for the forest in a framework such as Liu’s, with a reasonable expectation of success, such as to efficiently populate Liu’s forest with trees that are distinct from one another in a systematic manner.
Regarding claim 3, LIU teaches A computer-implemented (page 418’s section 5 discussing the use of CPUs and threaded jobs, i.e., clear use of computers, to perform the experimentation and evaluation on the framework’s test and inference/evaluation stages discussed per section 4) method for training a machine learning algorithm to detect at least one anomaly in at least one process (pages 417-418, sections 4 and 4.1, discussing the training stage for the framework to teach/train it to perform anomaly detection, and further the implication is that this is applicable to “various application domains” such as discussed in section 1’s first paragraph including logged credit card transaction data or logged computer network traffic data (i.e., examples of a process as recited)), the computer-implemented method comprising:
providing a training dataset of a process, the training dataset including a first number of training usage sequences, and the first number being greater than one (page 417’s sections 4 and 4.1: “training set”, which the Examiner understands to be inclusive of logged data (as discussed per section 1’s first paragraph) such that this data can be evaluated per anomaly detection, and such that this data could be understood to include instances of data as logged that are equivalent to “a first number of ... usage sequences”, which then if used to train per sections 4-4.1 then amount to “training usage sequences” as recited, and where all the data as logged and available could be understood to represent a recited “first number” of such data instances);
creating a second number of bootstrap datasets based on the training dataset (page 417’s section 4.1 discussing subsampling the training data set to create samples which the Examiner equates with the recited “bootstrap datasets”) by randomly drawing a third number of training usage sequences from the training dataset (page 416’s section 3, in its second paragraph, mentioning that “sub-sampling is conducted by random selection of instances”, and sections 4-4.1 further discuss a configurable sub-sampling size that is equivalent to how much data is drawn into each sub-sample), the second number being greater than one, and each draw of the drawing the third number is made from among the first number of training usage sequences (section 4.1 on page 418 mentions a configurable “number of tree” which, if defined, is equivalent to how many times a sub-sample is drawn from the training data set, and where per Figure 1C’s graph, the number of trees could be anywhere between 1-1000); and
training a machine learning algorithm to detect at least one anomaly in at least one process (pages 417-418’s sections 4-4.1), the training including creating a number of process trees (“number of trees”/ “ensemble size” per section 4.1 on page 418 specifically) equal to the second number (where the number of trees simply matches the number of sub-samples, and it follows that this is the most basic way to leverage any of the sub-samples that Liu contemplates, e.g. a 1-1 relationship between a sub-sample and a tree in the ensemble) using a process mining algorithm (pages 417-418’s algorithms 1-2, which the Examiner understands to be the algorithms used to transform logged process data into an ensemble model of trees), each respective process tree among the number of process trees being created based on one corresponding bootstrap dataset from among the second number of bootstrap datasets (where the number of trees simply matches the number of sub-samples, and it follows that this is the most basic way to leverage any of the sub-samples that Liu contemplates, e.g. a 1-1 relationship between a sub-sample and a tree in the ensemble).
Liu does not teach the creating including randomly selecting one among a plurality of split operator types for each operator node of each process tree among the second number of process trees. Rather, the Examiner relies upon GRUBINGER to teach what Liu otherwise lacks; see, e.g., Grubinger’s section 3.1 discussing “random split variable selection” in relation to node creation for process tree generation.
Both Liu and Grubinger relate to the generation of process trees in relation to a dataset. Hence, they are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the use of randomly selected split rules/operators, per Grubinger, to generate trees for the forest in a framework such as Liu’s, with a reasonable expectation of success, such as to efficiently populate Liu’s forest with trees that are distinct from one another in a systematic manner.
Regarding claim 15, Liu in view of Grubinger teach the computer-implemented method of claim 2, as discussed above. The aforementioned references further teach the additional limitation wherein the machine learning algorithm is a random forest algorithm comprising the second number of process trees (Liu: page 416’s section 3, in its second paragraph, mentioning that “sub-sampling is conducted by random selection of instances”, where this is understood to generate the trees in the ensemble relating to underlying logged process data (as the Examiner has discussed in relation to claim 1)).
Regarding claim 18, Liu in view of Grubinger teach the computer-implemented method of claim 3, as discussed above. The aforementioned references further teach the additional limitation wherein the machine learning algorithm is a random forest algorithm comprising the number of process trees (Liu: page 416’s section 3, in its second paragraph, mentioning that “sub-sampling is conducted by random selection of instances”, where this is understood to generate the trees in the ensemble relating to underlying logged process data (as the Examiner has discussed in relation to claim 1)).
Regarding claim 22, Liu in view of Grubinger teach the computer-implemented method of claim 2, as discussed above. The aforementioned references further teach the additional limitation for A data processing system, comprising at least one processor or electronic circuit (Liu: section 5’s first paragraph on page 418, discussing the use of CPUs in relation to the section 4 stages for training and evaluation) for performing at least the computer-implemented method of claim 2.
Regarding claim 23, Liu in view of Grubinger teach the computer-implemented method of claim 3, as discussed above. The aforementioned references further teach the additional limitation for A data processing system, comprising at least one processor or electronic circuit (Liu: section 5’s first paragraph on page 418, discussing the use of CPUs in relation to the section 4 stages for training and evaluation) for performing at least the computer-implemented method of claim 3.
9. Claims 4 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of previously-cited U.S. Patent Application Publication No. 2022/0075705 (“Scheepens”).
Regarding claim 4, Liu teaches the computer-implemented method of claim 1, as discussed above. The aforementioned reference does not teach the further limitation specifically wherein the process mining algorithm is an Inductive Miner algorithm. Rather, the Examiner relies upon SCHEEPENS to teach what Liu otherwise lacks; see, e.g., Scheepens’s [0001] discussing the applicability of inductive miner to process tree discovery tasks.
Like Liu, Scheepens is involved with data mining and, relatedly, data discovery, e.g., the generation of process trees/graphs and the like. Hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Scheepens’s specific algorithm into Liu’s similar feature to achieve the same or similar result, with a reasonable expectation of success, to realize the usability, understandability, and accuracy objectives discussed per Scheepens’s [0023] in Liu’s modified framework.
Regarding claim 21, Liu in view of Scheepens teach the computer-implemented method of claim 4, as discussed above. The aforementioned references further teach the additional limitations wherein the machine learning algorithm is a random forest algorithm comprising the number of process trees (Liu: page 416’s section 3, in its second paragraph, mentioning that “sub-sampling is conducted by random selection of instances”, where this is understood to generate the trees in the ensemble relating to underlying logged process data (as the Examiner has discussed in relation to claim 1)).
10. Claims 5, 9, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of previously-cited U.S. Patent Application Publication No. 2019/0138542 (“Van Beest”).
Regarding claim 5, Liu teaches the computer-implemented method of claim 1, as discussed above. The aforementioned reference does not teach the further limitation wherein each among the first number of training usage sequences includes a sequence of activities during use of a medical device for at least one of diagnosis of a patient or treatment of the patient. Rather, the Examiner relies upon VAN BEEST to teach what Liu otherwise lacks; see, e.g., Van Beest’s column 6 lines 60-67 discussing the use of medical devices explicitly in a person monitoring system resulting in the generation of event/log data, such as the type considered by Liu.
Like Liu, Van Beest relates to event/log data that can be evaluated for insights. Hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to extend the event logging generated in application uses per Liu to be inclusive of event logging in clinical contexts from a medical device, per Van Beest, with a reasonable expectation of success, such that the information/data pertinent to a patient’s treatment/care can be readily ingested from a device and subjected to analysis and use, thereby promoting advantages typically associated with automation in the state of the art.
Regarding claim 9, Liu teaches the computer-implemented method of claim 1, as discussed above. The aforementioned reference does not teach the further limitation wherein the first number of training usage sequences are usage sequences of a medical device. Rather, the Examiner relies upon VAN BEEST to teach what Liu otherwise lacks; see, e.g., Van Beest’s column 6 lines 60-67 discussing the use of medical devices explicitly in a person monitoring system resulting in the generation of event/log data, such as the type considered by Liu.
Like Liu, Van Beest relates to event/log data that can be evaluated for insights. Hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to extend the event logging generated in application uses per Liu to be inclusive of event logging in clinical contexts from a medical device, per Van Beest, with a reasonable expectation of success, such that the information/data pertinent to a patient’s treatment/care can be readily ingested from a device and subjected to analysis and use, thereby promoting advantages typically associated with automation in the state of the art.
Regarding claim 12, Liu teaches the data processing system of claim 10, as discussed above. The aforementioned reference does not teach the additional limitation for A system for detecting at least one anomaly in at least one process (as discussed per claim 1, Liu’s pages 417-418, sections 4 and 4.1, discussing the training stage for the framework to teach/train it to perform anomaly detection, and further the implication is that this is applicable to “various application domains” such as discussed in section 1’s first paragraph including logged credit card transaction data or logged computer network traffic data (i.e., examples of a process as recited)), comprising: a ... device configured to log usage sequences relating to processes and the data processing system of claim 10, the data processing system being communicatively connected to the ... device, and the data processing system being configured to receive the log usage sequences from the ... device (as discussed per claim 1, but also as just mentioned above, Liu’s page 413, section 1, 1st paragraph mentions the logging of process data and the use thereof for teaching/training and inference/evaluation aspects of a machine-learned framework to detect anomalies). Liu, as discussed thus far, does not explicitly teach that the device is a medical device, although it is open-ended in that it contemplates extensibility to “various application domains.” Rather, the Examiner relies upon VAN BEEST to teach what Liu otherwise lacks; see, e.g., Van Beest’s column 6 lines 60-67 discussing the use of medical devices explicitly in a person monitoring system resulting in the generation of event/log data, such as the type considered by Liu.
Like Liu, Van Beest relates to event/log data that can be evaluated for insights. Hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to extend the event logging generated in application uses per Liu to be inclusive of event logging in clinical contexts from a medical device, per Van Beest, with a reasonable expectation of success, such that the information/data pertinent to a patient’s treatment/care can be readily ingested from a device and subjected to analysis and use, thereby promoting advantages typically associated with automation in the state of the art.
11. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of previously-cited U.S. Patent Application Publication No. 2021/0216925 (“Dixit”).
Regarding claim 11, Liu teaches the computer-implemented method of claim 1, as discussed above. The aforementioned reference teaches A computer-implemented (page 418’s section 5 discussing the use of CPUs and threaded jobs, i.e., clear use of computers, to perform the experimentation and evaluation on the framework’s test and inference/evaluation stages discussed per section 4) method for detecting at least one anomaly in at least one process (pages 417-418, sections 4 and 4.1, discussing the training stage for the framework to teach/train it to perform anomaly detection, and further the implication is that this is applicable to “various application domains” such as discussed in section 1’s first paragraph including logged credit card transaction data or logged computer network traffic data (i.e., examples of a process as recited)), comprising: receiving a first usage sequence of the process (page 417’s sections 4 and 4.1: “training set”, which the Examiner understands to be inclusive of logged data (as discussed per section 1’s first paragraph) such that this data can be evaluated per anomaly detection, and such that this data could be understood to include instances of data as logged that are equivalent to “a first number of ... usage sequences”, which then if used to train per sections 4-4.1 then amount to “training usage sequences” as recited, and where all the data as logged and available could be understood to represent a recited “first number” of such data instances)
but does not teach the further limitations for determining a prediction vector based on the first usage sequence using a trained machine learning algorithm trained by the computer-implemented method of claim 1, the prediction vector including a corresponding value for each respective process tree among the number of process trees, the corresponding value indicating whether the first usage sequence fits the respective process tree and determining a normalized fitness value from the prediction vector and the second number and classifying the process based on the normalized fitness value. Rather, the Examiner relies upon DIXIT to teach what Liu otherwise lacks; see, e.g., the vectorization of the process data per Dixit’s [0111]-[0112]; the normalization of data per Dixit’s [0296], [0300], and [0304] to facilitate decision making; and the decision being made in part based on a “fitness metric” per Dixit’s [0054].
Dixit and Liu contemplate the same/similar problem/challenge in the state of the art (e.g., anomaly detection based on logged data using machine-learning approaches). Hence, they are highly analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Dixit’s vectorized approach into Liu’s framework, with a reasonable expectation of success, such that advantages related to Dixit’s data management aspects can be realized in a framework like Liu’s.
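The claimed prediction-vector and normalized-fitness determination, as the Examiner understands it, may be sketched minimally as follows (the 0.5 threshold and all identifiers are illustrative assumptions, not drawn from Liu or Dixit):

```python
def classify(fit_flags, threshold=0.5):
    # fit_flags is one value per process tree (1 if the input usage sequence
    # fits that tree, else 0), i.e., a prediction vector whose length equals
    # the number of trees ("second number").
    normalized_fitness = sum(fit_flags) / len(fit_flags)
    # Classify the process based on the normalized fitness value.
    label = "normal" if normalized_fitness >= threshold else "abnormal"
    return normalized_fitness, label
```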
12. Claims 13 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Grubinger and further in view of Scheepens.
Regarding claim 13, Liu in view of Grubinger teach the computer-implemented method of claim 2, as discussed above, but not the further limitation wherein the process mining algorithm is an Inductive Miner algorithm. Rather, the Examiner relies upon SCHEEPENS to teach what Liu otherwise lacks; see, e.g., Scheepens’s [0001] discussing the applicability of inductive miner to process tree discovery tasks.
Like Liu, Scheepens is involved with data mining and, relatedly, data discovery, e.g., the generation of process trees/graphs and the like. Hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Scheepens’s specific algorithm into Liu’s similar feature to achieve the same or similar result, with a reasonable expectation of success, to realize the usability, understandability, and accuracy objectives discussed per Scheepens’s [0023] in Liu’s modified framework.
Regarding claim 16, Liu in view of Grubinger teach the computer-implemented method of claim 3, as discussed above, but not the further limitation wherein the process mining algorithm is an Inductive Miner algorithm. Rather, the Examiner relies upon SCHEEPENS to teach what Liu and Grubinger otherwise lack; see, e.g., Scheepens’s [0001] discussing the applicability of inductive miner to process tree discovery tasks.
Like Liu, Scheepens is involved with data mining and, relatedly, data discovery, e.g., the generation of process trees/graphs and the like. Hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Scheepens’s specific algorithm into Liu’s similar feature to achieve the same or similar result, with a reasonable expectation of success, to realize the usability, understandability, and accuracy objectives discussed per Scheepens’s [0023] in Liu’s modified framework.
13. Claims 14, 17, and 27-28 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Grubinger and further in view of Van Beest.
Regarding claim 14, Liu in view of Grubinger teach the computer-implemented method of claim 2, as discussed above, but not the further limitation wherein each among the first number of training usage sequences includes a sequence of activities during use of a medical device for at least one of diagnosis of a patient or treatment of the patient. Rather, the Examiner relies upon VAN BEEST to teach what Liu and Grubinger otherwise lack; see, e.g., Van Beest’s column 6 lines 60-67 discussing the use of medical devices explicitly in a person monitoring system resulting in the generation of event/log data, such as the type considered by Liu.
Like Liu, Van Beest relates to event/log data that can be evaluated for insights. Hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to extend the event logging generated in application uses per Liu to be inclusive of event logging in clinical contexts from a medical device, per Van Beest, with a reasonable expectation of success, such that the information/data pertinent to a patient’s treatment/care can be readily ingested from a device and subject to analysis and use, thereby promoting advantages typically associated with automation in the state of the art.
Regarding claim 17, Liu in view of Grubinger teach the computer-implemented method of claim 3, as discussed above, but not the further limitation wherein each among the first number of training usage sequences includes a sequence of activities during use of a medical device for at least one of diagnosis of a patient or treatment of the patient. Rather, the Examiner relies upon VAN BEEST to teach what Liu etc. otherwise lack, see e.g., Van Beest’s column 6 lines 60-67 discussing the use of medical devices explicitly in a person monitoring system resulting in the generation of event/log data, such as the type considered by Liu for example.
Like Liu, Van Beest relates to event/log data that can be evaluated for insights. Hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to extend the event logging generated in application uses per Liu to be inclusive of event logging in clinical contexts from a medical device, per Van Beest, with a reasonable expectation of success, such that the information/data pertinent to a patient’s treatment/care can be readily ingested from a device and subject to analysis and use, thereby promoting advantages typically associated with automation in the state of the art.
Regarding claim 27, Liu in view of Grubinger teach the data processing system of claim 22, as discussed above and A system for detecting at least one anomaly in at least one process (as discussed per claim 1, Liu’s pages 417-418, sections 4 and 4.1, discussing the training stage for the framework to teach/train it to perform anomaly detection, and further the implication is that this is applicable to “various application domains” such as discussed in section 1’s first paragraph including logged credit card transaction data or logged computer network traffic data (i.e., examples of a process as recited)), but not the further limitations for a medical device configured to log usage sequences relating to processes and the data processing system being communicatively connected to the medical device, and the data processing system being configured to receive the log usage sequences from the medical device. Rather, the Examiner relies upon VAN BEEST to teach what Liu etc. otherwise lack, see e.g., Van Beest’s column 6 lines 60-67 discussing the use of medical devices explicitly in a person monitoring system resulting in the generation of event/log data, such as the type considered by Liu for example.
Like Liu, Van Beest relates to event/log data that can be evaluated for insights. Hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to extend the event logging generated in application uses per Liu to be inclusive of event logging in clinical contexts from a medical device, per Van Beest, with a reasonable expectation of success, such that the information/data pertinent to a patient’s treatment/care can be readily ingested from a device and subject to analysis and use, thereby promoting advantages typically associated with automation in the state of the art.
Regarding claim 28, Liu in view of Grubinger teach the data processing system of claim 23, as discussed above and A system for detecting at least one anomaly in at least one process (as discussed per claim 1, Liu’s pages 417-418, sections 4 and 4.1, discussing the training stage for the framework to teach/train it to perform anomaly detection, and further the implication is that this is applicable to “various application domains” such as discussed in section 1’s first paragraph including logged credit card transaction data or logged computer network traffic data (i.e., examples of a process as recited)), but not the further limitations for a medical device configured to log usage sequences relating to processes and the data processing system being communicatively connected to the medical device, and the data processing system being configured to receive the log usage sequences from the medical device. Rather, the Examiner relies upon VAN BEEST to teach what Liu etc. otherwise lack, see e.g., Van Beest’s column 6 lines 60-67 discussing the use of medical devices explicitly in a person monitoring system resulting in the generation of event/log data, such as the type considered by Liu for example.
Like Liu, Van Beest relates to event/log data that can be evaluated for insights. Hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to extend the event logging generated in application uses per Liu to be inclusive of event logging in clinical contexts from a medical device, per Van Beest, with a reasonable expectation of success, such that the information/data pertinent to a patient’s treatment/care can be readily ingested from a device and subject to analysis and use, thereby promoting advantages typically associated with automation in the state of the art.
14. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Scheepens and further in view of Van Beest.
Regarding claim 20, Liu in view of Scheepens teach the computer-implemented method of claim 4, as discussed above, but not the further limitation wherein each among the first number of training usage sequences includes a sequence of activities during use of a medical device for at least one of diagnosis of a patient or treatment of the patient. Rather, the Examiner relies upon VAN BEEST to teach what Liu etc. otherwise lack, see e.g., Van Beest’s column 6 lines 60-67 discussing the use of medical devices explicitly in a person monitoring system resulting in the generation of event/log data, such as the type considered by Liu for example.
Like Liu, Van Beest relates to event/log data that can be evaluated for insights. Hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to extend the event logging generated in application uses per Liu to be inclusive of event logging in clinical contexts from a medical device, per Van Beest, with a reasonable expectation of success, such that the information/data pertinent to a patient’s treatment/care can be readily ingested from a device and subject to analysis and use, thereby promoting advantages typically associated with automation in the state of the art.
15. Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Dixit and further in view of Van Beest.
Regarding claim 24, Liu in view of Dixit teach the computer-implemented method of claim 11, as discussed above, but not the further limitation wherein the computer-implemented method is for detecting at least one anomaly in at least one process by at least one medical device. Rather, the Examiner relies upon VAN BEEST to teach what Liu etc. otherwise lack, see e.g., Van Beest’s column 6 lines 60-67 discussing the use of medical devices explicitly in a person monitoring system resulting in the generation of event/log data, such as the type considered by Liu for example.
Like Liu, Van Beest relates to event/log data that can be evaluated for insights. Hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to extend the event logging generated in application uses per Liu to be inclusive of event logging in clinical contexts from a medical device, per Van Beest, with a reasonable expectation of success, such that the information/data pertinent to a patient’s treatment/care can be readily ingested from a device and subject to analysis and use, thereby promoting advantages typically associated with automation in the state of the art.
16. Claims 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Grubinger and further in view of Dixit.
Regarding claim 25, Liu in view of Grubinger teach the computer-implemented method of claim 2 and A computer-implemented (Liu: page 418’s section 5 discussing the use of CPUs and threaded jobs, i.e., clear use of computers, to perform the experimentation and evaluation on the framework’s test and inference/evaluation stages discussed per section 4) method for detecting at least one anomaly in at least one process (Liu: pages 417-418, sections 4 and 4.1, discussing the training stage for the framework to teach/train it to perform anomaly detection, and further the implication is that this is applicable to “various application domains” such as discussed in section 1’s first paragraph including logged credit card transaction data or logged computer network traffic data (i.e., examples of a process as recited)), comprising: receiving a first usage sequence of the process (Liu: page 417’s sections 4 and 4.1: “training set”, which the Examiner understands to be inclusive of logged data (as discussed per section 1’s first paragraph) such that this data can be evaluated per anomaly detection, and such that this data could be understood to include instances of data as logged that are equivalent to “a first number of ... usage sequences”, which then if used to train per sections 4-4.1 then amount to “training usage sequences” as recited, and where all the data as logged and available could be understood to represent a recited “first number” of such data instances)
but not the further limitations for determining a prediction vector based on the first usage sequence using a trained machine learning algorithm trained by the computer-implemented method of claim 2, the prediction vector including a corresponding value for each respective process tree among the second number of process trees, the corresponding value indicating whether the first usage sequence fits the respective process tree, and determining a normalized fitness value from the prediction vector and the second number, and classifying the process based on the normalized fitness value. Rather, the Examiner relies upon DIXIT to teach what Liu otherwise lacks, see, e.g., the vectorization of the process data as taught per Dixit’s [0111]-[0112]; the subjecting of data to normalization per Dixit’s [0296], [0300], and [0304] to facilitate decision making; and the decision being made in part based on a “fitness metric” per Dixit’s [0054].
Dixit and Liu contemplate the same/similar problem/challenge in the state of the art (e.g., anomaly detection based on logged data using machine-learning approaches). Hence, they are highly analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Dixit’s vectorized approach into Liu’s framework, with a reasonable expectation of success, such that advantages related to Dixit’s data management aspects can be realized in a framework like Liu.
Regarding claim 26, Liu in view of Grubinger teach the computer-implemented method of claim 3 and A computer-implemented (page 418’s section 5 discussing the use of CPUs and threaded jobs, i.e., clear use of computers, to perform the experimentation and evaluation on the framework’s test and inference/evaluation stages discussed per section 4) method for detecting at least one anomaly in at least one process (pages 417-418, sections 4 and 4.1, discussing the training stage for the framework to teach/train it to perform anomaly detection, and further the implication is that this is applicable to “various application domains” such as discussed in section 1’s first paragraph including logged credit card transaction data or logged computer network traffic data (i.e., examples of a process as recited)), comprising: receiving a first usage sequence of the process (page 417’s sections 4 and 4.1: “training set”, which the Examiner understands to be inclusive of logged data (as discussed per section 1’s first paragraph) such that this data can be evaluated per anomaly detection, and such that this data could be understood to include instances of data as logged that are equivalent to “a first number of ... usage sequences”, which then if used to train per sections 4-4.1 then amount to “training usage sequences” as recited, and where all the data as logged and available could be understood to represent a recited “first number” of such data instances)
but not the further limitations for determining a prediction vector based on the first usage sequence using a trained machine learning algorithm trained by the computer-implemented method of claim 3, the prediction vector including a corresponding value for each respective process tree among the second number of process trees, the corresponding value indicating whether the first usage sequence fits the respective process tree, and determining a normalized fitness value from the prediction vector and the second number, and classifying the process based on the normalized fitness value. Rather, the Examiner relies upon DIXIT to teach what Liu otherwise lacks, see, e.g., the vectorization of the process data as taught per Dixit’s [0111]-[0112]; the subjecting of data to normalization per Dixit’s [0296], [0300], and [0304] to facilitate decision making; and the decision being made in part based on a “fitness metric” per Dixit’s [0054].
Dixit and Liu contemplate the same/similar problem/challenge in the state of the art (e.g., anomaly detection based on logged data using machine-learning approaches). Hence, they are highly analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Dixit’s vectorized approach into Liu’s framework, with a reasonable expectation of success, such that advantages related to Dixit’s data management aspects can be realized in a framework like Liu.
Conclusion
17. The prior art made of record and not relied upon is considered pertinent to Applicants’ disclosure:
Non-Patent Literature “Efficient Inference of Optimal Decision Trees”
Non-Patent Literature “Process Mining Explained”
Non-Patent Literature “Improvement of ID3 Algorithm Based on Simplified Information Entropy and Coordination Degree”
Non-Patent Literature “Understanding Random Forests From Theory to Practice”
Non-Patent Literature “New Techniques for Mining Frequent Patterns in Unordered Trees”
18. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHOURJO DASGUPTA whose telephone number is (571)272-7207. The examiner can normally be reached M-F 8am-5pm CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle, can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHOURJO DASGUPTA/Primary Examiner, Art Unit 2144