DETAILED ACTION
This Non-Final Office Action is in response to the application filed on 11/01/2024.
Claims 1-20 filed on 11/01/2024 are being considered on the merits.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings filed on 11/01/2024 are accepted.
Specification
The disclosure is objected to because of the following informalities: [0024] recites “…each represents particular type…”, should be “…each represents a particular type…”.
Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4, 9-14, 17-18, and 19-20 of US Patent 12182257 B2, hereinafter '257.
Instant Application 18/935,067
US 12182257 B2
1. A system for classifying event data as anomalous, the system comprising: a processor; and a memory device that stores program code structured to cause the processor to:
1. A system for classifying event data as anomalous, the system comprising: a processor; and a memory device that stores program code to be executed by the processor, the program code comprising: a trained autoencoder model configured to:
autoencode a multivariate input feature vector to generate a first output, the multivariate input feature vector comprising input feature elements associated with event data;
autoencode a multivariate input feature vector to generate a first output, the multivariate input feature vector comprising input feature elements associated with event data;
generate a predicted multivariate feature vector based on the first output, the predicted multivariate feature vector comprising predicted feature elements corresponding to the input feature elements of the multivariate input feature vector;
and generate a predicted multivariate feature vector based on the first output, the predicted multivariate feature vector comprising predicted feature elements corresponding to the input feature elements of the multivariate input feature vector;
and in response to the predicted multivariate feature vector being classified as anomalous based on a reconstruction loss associated with the predicted multivariate feature vector: determine a percentage of contribution to the reconstruction loss by a first predicted feature element, and select the first predicted feature element as a cause for the anomalous classification based on the percentage of contribution to the reconstruction loss by the first predicted feature element.
Claims 8 and 15
an underprediction determiner configured to: determine whether a first predicted feature element of the predicted multivariate feature vector is an underprediction of a corresponding input feature element of the multivariate input feature vector; a percent contribution determiner configured to: in response to the predicted multivariate feature vector being classified as anomalous based on a reconstruction loss associated with the predicted multivariate feature vector, determine a percentage of contribution to the reconstruction loss by the first predicted feature element; and a reason generator configured to: in response to classifying the predicted multivariate feature vector as anomalous based on the reconstruction loss, select the first predicted feature element as a likely cause for the anomalous classification based on at least one of: a determination that the first predicted feature element is an underprediction of the corresponding input feature element, or the percentage of contribution to the reconstruction loss by the first predicted feature element.
Claims 11 and 19
2. The system of claim 1, wherein, to determine the percentage of contribution to the reconstruction loss by the first predicted feature element, the program code is structured to cause the processor to: determine a difference between a value of the first predicted feature element and a value of the corresponding input feature element; and divide the difference by the reconstruction loss associated with the predicted multivariate feature vector to determine the percentage of contribution to the reconstruction loss by the first predicted feature element.
Claims 9 and 16
10. The system of claim 1, wherein, to determine the percentage of contribution to the reconstruction loss by the first predicted feature element, the percent contribution determiner is further configured to: determine a difference between a value of the first predicted feature element and a value of the corresponding input feature element; and divide the difference by the reconstruction loss associated with the predicted multivariate feature vector to determine the percentage of contribution to the reconstruction loss by the first predicted feature element.
Claims 17 and 20
The system of claim 1, wherein the program code is structured to further cause the processor to: determine whether the first predicted feature element of the predicted multivariate feature vector is an underprediction of a corresponding input feature element of the multivariate input feature vector; and select the first predicted feature element as a cause for the anomalous classification further based on a determination that the first predicted feature element is an underprediction of the corresponding input feature element.
Claims 10 and 17
…an underprediction determiner configured to: determine whether a first predicted feature element of the predicted multivariate feature vector is an underprediction of a corresponding input feature element of the multivariate input feature vector; a percent contribution determiner configured to: in response to the predicted multivariate feature vector being classified as anomalous based on a reconstruction loss associated with the predicted multivariate feature vector, determine a percentage of contribution to the reconstruction loss by the first predicted feature element; and a reason generator configured to: in response to classifying the predicted multivariate feature vector as anomalous based on the reconstruction loss, select the first predicted feature element as a likely cause for the anomalous classification based on at least one of: a determination that the first predicted feature element is an underprediction of the corresponding input feature element, or the percentage of contribution to the reconstruction loss by the first predicted feature element.
Claims 11 and 19
4. The system of claim 1, wherein the program code is structured to further cause the processor to: return a reason for the anomalous classification of the predicted multivariate feature vector based on the first predicted feature element.
Claims 11 and 18
2. The system of claim 1, wherein the reason generator further configured to: return a reason for the anomalous classification of the predicted multivariate feature vector based on the first predicted feature element.
Claim 12
5. The system of claim 1, wherein the multivariate input feature vector and the predicted multivariate feature vector are associated with a corresponding user and a corresponding user session.
Claims 12 and 19
3. The system of claim 1, wherein the multivariate input feature vector and the predicted multivariate feature vector are associated with a corresponding user and a corresponding user session.
Claim 13
6. The system of claim 1, wherein the program code is structured to further cause the processor to: retrieve the event data from event logs; generate, based on the event data, a plurality of multivariate training input feature vectors and a plurality of multivariate test input feature vectors, wherein the plurality of multivariate test input feature vectors comprises the multivariate input feature vector; and train the autoencoder model based on the plurality of multivariate training input feature vectors to generate the trained autoencoder model.
Claims 13 and 20
4. The system of claim 1, wherein the program code further comprises: a data retrieval manager configured to: prior to autoencoding the multivariate input feature vector based on the trained autoencoder model, retrieve event data based on event logs; a data aggregation engine configured to: aggregate the event data and generate a plurality of multivariate training input feature vectors and a plurality of multivariate test input feature vectors, wherein the plurality of multivariate test input feature vectors comprises the multivariate input feature vector; and a model training manager configured to: train the autoencoder model based on the plurality of multivariate training input feature vectors to generate the trained autoencoder model.
Claim 14
7. The system of claim 1, wherein the event data comprises at least one of: data logged from user sign-in sessions; or data logged from application sessions.
Claim 14
9. The system of claim 1, wherein the event data comprises at least one of: data logged from user sign-in sessions; or data logged from application sessions.
Claim 18
Although the conflicting claims are not identical, they are not patentably distinct from each other because claims 1-4, 9-14, 17-18, and 19-20 of '257 contain every element of claims 1-20 of the instant application.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 4-6, 8-9, 11-13, 15-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Vaidya (US 20200380531 A1) in view of Pierri (US 20210349897 A1).
Regarding claim 1, Vaidya teaches a system for classifying event data as anomalous (Vaidya Figure 2 AB anomaly detector), the system comprising:
a processor; and a memory device that stores program code (Vaidya [0024] “one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions”) structured to cause the processor to:
autoencode a multivariate input feature vector to generate a first output, the multivariate input feature vector comprising input feature elements associated with event data (Vaidya [0024] “receiving a set of input data, wherein the set of input data is associated with an entity; automatically obtaining, based on the received set of input data, a set of derived data, wherein the set of derived data is associated with the entity; obtaining, based on the set of derived data, a plurality of feature values corresponding to a plurality of features; providing the plurality of feature values (i.e. multivariate input feature vector) to an autoencoder-decoder to obtain a plurality of feature-specific reconstruction errors”, where the set of input/event data is associated with an entity, further in Figure 2AB [0057] “With reference to FIG. 2A, the detector comprises two recurrent neural networks 202 and 204 attached together with a single layer as the join area, the embedded space 206. The task of the encoder 202 is to compress the input 208 to the embedded vector space 206 (i.e. first output), while the task of the decoder 204 is to decompress the embedded vector back to the dimension of the original input. Specifically, the encoder 202 reduces the dimensionality of a large input 208 by compressing the input, layer by layer, until it is some embedded space size.”);
generate a predicted multivariate feature vector based on the first output, the predicted multivariate feature vector comprising predicted feature elements corresponding to the input feature elements of the multivariate input feature vector (Vaidya [0057] “With reference to FIG. 2A, the detector comprises two recurrent neural networks 202 and 204 attached together with a single layer as the join area, the embedded space 206. The task of the encoder 202 is to compress the input 208 to the embedded vector space 206 (i.e. first output), while the task of the decoder 204 is to decompress the embedded vector back to the dimension of the original input. Specifically, the encoder 202 reduces the dimensionality of a large input 208 by compressing the input, layer by layer, until it is some embedded space size. The decoder 204 operates to decompress the compressed input in the embedded space 206 to output a reconstructed version of the input. The detector 200 learns by minimizing the reconstruction error between the input 208 and the output 210 (i.e. predicted multivariate feature vector).”, where 210 is based on the output 206); and
in response to the predicted multivariate feature vector being classified as anomalous based on a reconstruction loss associated with the predicted multivariate feature vector (Vaidya [0057] “The detector 200 learns by minimizing the reconstruction error (i.e. reconstruction loss) between the input 208 and the output 210. In some embodiments, the reconstruction error is based on the reconstruction error between each of the features in the input (i.e., features 1, 2, . . . , m) and each of the corresponding features in the output (i.e., reconstructed features 1, 2, . . . , m).”, [0058] “The architecture of the RNN variational autoencoder-decoder performs unsupervised learning. It is well-suited for learning anomaly detection”, [0075] “…the larger the reconstruction error, the more anomalous the input is. ”):
determine a [[percentage of]] contribution to the reconstruction loss by a first predicted feature element, and select the first predicted feature element as a cause for the anomalous classification based on the [[percentage of]] contribution to the reconstruction loss by the first predicted feature element (Vaidya [0075] “…the larger the reconstruction error, the more anomalous the input is. By identifying the feature(s) with the largest feature-wise reconstruction error(s), the platform can determine what specific feature(s) of the input are the most anomalous from the typical dataset.” Further in [0078] and [0080] “…the feature-specific reconstruction errors provide insight into exactly which features have contributed to the higher total reconstruction error.”, [0087] “…the high risk areas represent the features associated with the highest feature-specific reconstruction errors. The user interface can further provide a risk score for a particular risk area, which can be calculated based on the associated reconstruction error and one or more thresholds (either default or user-specified).”, where the risk score for a particular area associated with feature-specific reconstruction errors correspond to the determined contribution and accordingly selected to be displayed in the merchant-specific reporting user interface 502 in Figure 5B).
Vaidya discloses scores which can be construed as a percentage of contribution; however, Vaidya does not explicitly disclose a percentage. Emphasis in italics.
Pierri discloses determine a percentage of contribution to the reconstruction loss by a first predicted feature element and select the first predicted feature element as a cause for the anomalous classification based on the percentage of contribution to the reconstruction loss (Pierri [0070] “The inference comprises inputting time series data to the trained machine learning model and receiving from the trained machine learning model an output indicating that values of a second group of one or more of the measurands of a subset of the received sensor data indicates an anomaly. As illustrated in FIG. 3, embodiments of the present invention can utilize a model reconstruction error to provide an anomaly score for each input measurand of the trained machine learning model. Points (i.e., measurands) with a high reconstruction error (i.e., far from norm) are anomalies. Further embodiments of the present invention can perform post-processing 306 of the anomaly scores in order to obtain normalized anomaly scores. For example, the post-processing may be performed using a min-max scaler to re-scale the score to be in the range [0,1].”, where the normalized score between [0,1] range corresponds to a normalized percentage [0,100]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vaidya to incorporate the teaching of Pierri to utilize the above feature, with the motivation of performing normalized anomaly extraction with identified minimum and maximum scores such that a root cause analysis is accordingly performed, as recognized by Pierri ([0070]-[0071]).
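For illustration only (not part of the record), the percentage-of-contribution determination recited in claim 1, as mapped above to the feature-wise reconstruction errors of Vaidya, can be sketched as follows; the example vectors, the squared-error loss, and the anomaly threshold are hypothetical choices, not taken from either reference.

```python
# Hypothetical sketch: per-feature reconstruction error, percentage of
# contribution to the total reconstruction loss, and selection of the
# feature most responsible for an anomalous classification.

def reconstruction_report(inputs, predictions, threshold=1.0):
    """Return (is_anomalous, per-feature percentage contributions)."""
    # Per-feature squared error between predicted and input elements.
    errors = [(p - x) ** 2 for x, p in zip(inputs, predictions)]
    total = sum(errors)  # reconstruction loss for the whole vector
    is_anomalous = total > threshold
    # Percentage of contribution of each predicted feature element.
    contributions = [100.0 * e / total for e in errors] if total else [0.0] * len(errors)
    return is_anomalous, contributions

anomalous, pct = reconstruction_report([1.0, 2.0, 3.0], [1.1, 2.0, 5.0])
cause = max(range(len(pct)), key=pct.__getitem__)  # feature selected as likely cause
```

In this sketch the third feature dominates the loss, so it is selected as the likely cause of the anomalous classification.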
Regarding claim 8, claim 8 recites similar limitations to claim 1, therefore rejected with the same rationale and motivation applied to claim 1.
Regarding claim 15, claim 15 recites similar limitations to claim 1, therefore rejected with the same rationale and motivation applied to claim 1.
Regarding claim 2, Vaidya in view of Pierri teaches the system of claim 1.
Vaidya discloses scores which can be construed as a percentage contribution since the score is proportional to the reconstruction error and anomalies; however, Vaidya does not explicitly disclose a percentage.
Pierri discloses wherein, to determine the percentage of contribution to the reconstruction loss by the first predicted feature element, the program code is structured to cause the processor to: determine a difference between a value of the first predicted feature element and a value of the corresponding input feature element; and divide the difference by the reconstruction loss associated with the predicted multivariate feature vector to determine the percentage of contribution to the reconstruction loss by the first predicted feature element (Pierri [0070] “The inference comprises inputting time series data to the trained machine learning model and receiving from the trained machine learning model an output indicating that values of a second group of one or more of the measurands of a subset of the received sensor data indicates an anomaly. As illustrated in FIG. 3, embodiments of the present invention can utilize a model reconstruction error to provide an anomaly score for each input measurand of the trained machine learning model. Points (i.e., measurands) with a high reconstruction error (i.e., far from norm) are anomalies. Further embodiments of the present invention can perform post-processing 306 of the anomaly scores in order to obtain normalized anomaly scores. For example, the post-processing may be performed using a min-max scaler to re-scale the score to be in the range [0,1].”, where the normalized score between [0,1] range corresponding to the normalized percentage [0,100], examiner notes that in order to calculate the normalized percentage score [0,1], which is based on the ratio between reconstruction error, i.e. difference, divided by the total output, where this is a basic and obvious mathematical calculation to determine a normalized percentage output in the range [0,1]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vaidya to incorporate the teaching of Pierri to utilize the above feature, with the motivation of performing normalized anomaly extraction with identified minimum and maximum scores such that a root cause analysis is accordingly performed, as recognized by Pierri ([0070]-[0071]).
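For illustration only (not part of the record), the min-max post-processing quoted above from Pierri [0070], which re-scales raw anomaly scores into the range [0, 1], maps directly onto a [0, 100] percentage; the sample scores below are hypothetical.

```python
# Hypothetical sketch of a min-max scaler applied to per-feature anomaly
# (reconstruction-error) scores, as described in the quoted passage.

def min_max_scale(scores):
    """Re-scale scores so the smallest maps to 0.0 and the largest to 1.0."""
    lo, hi = min(scores), max(scores)
    span = hi - lo
    return [(s - lo) / span if span else 0.0 for s in scores]

raw = [0.2, 0.5, 4.1, 1.3]       # raw per-feature reconstruction-error scores
normalized = min_max_scale(raw)  # each value now lies in [0, 1]
percent = [100.0 * s for s in normalized]  # equivalent [0, 100] percentage
```

The normalized [0, 1] score and the [0, 100] percentage differ only by a constant factor of 100.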
Regarding claim 9, claim 9 recites similar limitations to claim 2, therefore rejected with the same rationale and motivation applied to claim 2.
Regarding claim 16, claim 16 recites similar limitations to claim 2, therefore rejected with the same rationale and motivation applied to claim 2.
Regarding claim 4, Vaidya in view of Pierri teaches the system of claim 1, wherein the program code is structured to further cause the processor to: return a reason for the anomalous classification of the predicted multivariate feature vector based on the first predicted feature element (Vaidya [0075] “Since the reconstruction error represents the overall difference between the input and the output of the detector, a perfect reconstruction would be one that yields no difference between the input and the output (i.e., the total reconstruction error is zero). On the other hand, the larger the reconstruction error, the more anomalous the input is. By identifying the feature(s) with the largest feature-wise reconstruction error(s), the platform can determine what specific feature(s) of the input are the most anomalous from the typical dataset.”, [0076] “The autoencoder-decoder allows for real-time updating of risk as a merchant receives transactions. Specifically, the merchant's statement information is stored and used to provide context to the transactional information to try to learn not only whether or not an individual transaction is anomalous for this merchant, but to detect if the pattern of transactions is anomalous for this merchant. To accomplish this, each of the encoder and the decoder networks is implemented as a RNN, which provides information about the previous states of the system to future predictions. This allows the platform to see whether this transaction pattern is anomalous for this type of merchant and be able to have the NN make comparisons about other merchants of the same type.”).
Regarding claim 11, claim 11 recites similar limitations to claim 4, therefore rejected with the same rationale and motivation applied to claim 4.
Regarding claim 18, claim 18 recites similar limitations to claim 4, therefore rejected with the same rationale and motivation applied to claim 4.
Regarding claim 5, Vaidya in view of Pierri teaches the system of claim 1, wherein the multivariate input feature vector and the predicted multivariate feature vector are associated with a corresponding user and a corresponding user session ([0054] “…the selected entity-specific data 112 comprises data related to an activity (e.g., transaction data). Transactional data provides information related to activities of an entity (e.g., a merchant) and can be indicative of how the business is performing…the transaction data includes transaction amount (e.g., for a sale), a refund amount, type of card (e.g., VISA, MasterCard), entry mode (e.g., card, e-commerce), authorization source (i.e., the way in which the transaction was authorized), cardholder authorization method (e.g., PIN, signature), type of the associated terminal, capacity of the associated terminal, purpose of the transaction (e.g., debit, credit, cash advance), the number of times the card has been seen at the same business, the percentage of transactions that are from the same set of cards at the same business, the time difference between two transactions, the number of attempts associated with the transaction, information related to the cardholder, information related to the card (e.g., bank), or any combination thereof.”, merchant corresponds to the user/entity and the session associated with the merchant is the session pertaining to the session when the above disclosed merchant transaction is being processed).
Regarding claim 12, claim 12 recites similar limitations to claim 5, therefore rejected with the same rationale and motivation applied to claim 5.
Regarding claim 19, claim 19 recites similar limitations to claim 5, therefore rejected with the same rationale and motivation applied to claim 5.
Regarding claim 6, Vaidya in view of Pierri teaches the system of claim 1, wherein the program code is structured to further cause the processor to: retrieve the event data from event logs (Vaidya [0045] “…the platform receives input data relating to an entity 102. The entity can be a merchant (e.g., an e-commerce company, a physical store). In some embodiments, the platform provides one or more user interfaces (e.g., via a web portal) that allow a representative of the entity to enter the entity-specific data 102 as a part of completing a merchant application form. In some embodiments, the entity-specific data 102 comprises basic information about the entity, such as: business name, business location(s), business license(s), type of transaction supported, ownership structure, information about the owners, information about the employees, tax identification number, social security number, transaction volume (e.g., monthly), or any combination thereof.”, [0054] “…the selected entity-specific data 112 comprises data related to an activity (e.g., transaction data). Transactional data provides information related to activities of an entity (e.g., a merchant) and can be indicative of how the business is performing…the transaction data includes transaction amount…”);
generate, based on the event data, a plurality of multivariate training input feature vectors and a plurality of multivariate test input feature vectors, wherein the plurality of multivariate test input feature vectors comprises the multivariate input feature vector (Vaidya [0072] “…a batching strategy is implemented on the training data to avoid overfitting of merchants with relatively large amounts of transactions in comparison to other merchants. The batch size is a hyper-parameter that defines the number of samples to work through before updating the internal model parameters. If the detector is trained based on a training set that includes more data from merchants who have a larger number of transactions, it is not as accurate on merchants who have a smaller number of transactions. Thus, a batching strategy is implemented to force merchants with large number of transactions to be batched with merchants that have few transactions with some ratio. This means that, at the end of the batch, the learning applied (i.e., the updating of the internal model parameters) will be evenly distributed across merchants with large numbers of transactions and those with small numbers of transactions.”); and
train the autoencoder model based on the plurality of multivariate training input feature vectors to generate the trained autoencoder model (Vaidya [0073-0074] “Feeding the training data 116 into the autoencoder-decoder 114 forces it to learn how to compress normal data… Accordingly, abnormal data (e.g., indicative of fraudulent merchant, fraudulent transaction of a fraudulent or non-fraudulent merchant) can be detected based on the magnitude of the reconstruction error.”).
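For illustration only (not part of the record), the pipeline mapped above for claim 6 — retrieving event data from logs, generating training and test input feature vectors, and training a reconstruction model — can be sketched as follows; the log fields, the 80/20 split, and the trivial mean-based reconstructor standing in for the trained autoencoder are all hypothetical.

```python
# Hypothetical sketch: event logs -> multivariate feature vectors ->
# train/test split -> fitted reconstruction model used for anomaly scoring.

def vectors_from_logs(event_logs):
    """Derive one multivariate input feature vector per logged event."""
    return [[float(e["amount"]), float(e["attempts"])] for e in event_logs]

def split(vectors, train_fraction=0.8):
    """Split into training and test input feature vector sets."""
    cut = int(len(vectors) * train_fraction)
    return vectors[:cut], vectors[cut:]

class MeanReconstructor:
    """Stand-in for the trained autoencoder: reconstructs every input as
    the per-feature training mean, so unusual vectors get a large loss."""

    def fit(self, train):
        n = len(train)
        self.mean = [sum(col) / n for col in zip(*train)]
        return self

    def loss(self, vector):
        # Squared-error reconstruction loss against the learned mean.
        return sum((x - m) ** 2 for x, m in zip(vector, self.mean))

logs = [{"amount": 10, "attempts": 1}] * 8 + [{"amount": 500, "attempts": 9}] * 2
train, test = split(vectors_from_logs(logs))
model = MeanReconstructor().fit(train)
losses = [model.loss(v) for v in test]  # large losses flag the unusual vectors
```

The two unusual test vectors reconstruct poorly and therefore receive large losses, mirroring the claimed use of reconstruction loss to classify event data as anomalous.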
Regarding claim 13, claim 13 recites similar limitations to claim 6, therefore rejected with the same rationale and motivation applied to claim 6.
Regarding claim 20, claim 20 recites similar limitations to claim 6, therefore rejected with the same rationale and motivation applied to claim 6.
Claims 3, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Vaidya (US 20200380531 A1) in view of Pierri (US 20210349897 A1) and Pushpala (US 20210321942 A1).
Regarding claim 3, Vaidya in view of Pierri teaches the system of claim 1, wherein the program code is structured to further cause the processor to:
Vaidya discloses a reconstruction error between the input and the reconstructed input, where a higher reconstruction error indicates anomalies. A higher reconstruction error indicates one of two scenarios: 1) the reconstructed input values are much less than the input values, i.e., underprediction, or 2) the reconstructed input values are much higher than the input values, i.e., overprediction, consistent with the description of underprediction in the instant application in [0028] “…For each feature, the system checks whether the reconstructed value is higher or lower than the original session value to determine overprediction or underprediction respectively.”
Vaidya discloses that either of these two scenarios indicates anomalies. Therefore, it would have been obvious to one of ordinary skill in the art to conceive of either one of the two scenarios, i.e., underprediction or overprediction. However, Vaidya in view of Pierri does not explicitly disclose the below limitation.
Pushpala discloses determine whether the first predicted feature element of the predicted multivariate feature vector is an underprediction of a corresponding input feature element of the multivariate input feature vector; and select the first predicted feature element as a cause for the anomalous classification further based on a determination that the first predicted feature element is an underprediction of the corresponding input feature element (Pushpala discloses in [0221] “ a method for detecting sensor anomalies can include comparing sensor data obtained during a particular time period to a prediction of the sensor data for that time period. The prediction can be generated using any of the methods described herein (e.g., in connection with FIGS. 10-12). If the actual sensor data differs significantly from the predicted sensor data (e.g., the health parameter value generated from the actual sensor data is significantly higher or lower than the prediction for that value), the method can determine that a sensor anomaly has occurred. Alternatively or in combination, the method can detect sensor anomalies using data from other sensors and/or devices, such as motion sensors (e.g., accelerometers, gyroscopes), heart rate sensors, temperature sensors, location sensors, pressure sensors, optical sensors, etc.… obtaining data from a plurality of sensors and/or devices to assess the user's current state, surrounding environment, and/or other contextual information, and using the contextual information to detect the likelihood of sensor anomalies feature…uses trained machine learning models to identify instances of sensor anomalies. 
If the method detects that a sensor anomaly is occurring or is likely to occur, the method can modify and/or exclude anomalous sensor data, e.g., by alter the operating parameters (e.g., filtering parameters), omitting sensor data from certain time period, etc.”, where the features of a particular sensor, among the features of the plurality of sensors, that cause the anomaly are detected/determined, i.e., a feature associated with one of the plurality of sensors; that feature is accordingly selected as causing the anomaly based on the fact that the health parameter value generated from the actual sensor data is significantly higher or lower than the prediction for that value, and operations, e.g., filtering, modifying, etc., are performed on it, consistent with the description of underprediction in the instant application at [0028]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vaidya in view of Pierri to incorporate the teaching of Pushpala and utilize the above feature, with the motivation of detecting conditions that are likely to lead to sensor dropout or other anomalies, as recognized by Pushpala ([0100], [0221]).
Regarding claim 10, claim 10 recites limitations similar to those of claim 3 and is therefore rejected under the same rationale and motivation applied to claim 3.
Regarding claim 17, claim 17 recites limitations similar to those of claim 3 and is therefore rejected under the same rationale and motivation applied to claim 3.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Vaidya (US 20200380531 A1) in view of Pierri (US 20210349897 A1) and Aydore (US 11537902 B1).
Regarding claim 7, Vaidya in view of Pierri teaches the system of claim 1.
Vaidya in view of Pierri does not disclose the below limitation.
Aydore discloses wherein the event data comprises at least one of: data logged from user sign-in sessions; or data logged from application sessions (Aydore, Col. 8, lines 25-29: “…user information included in the incoming request (e.g., user account information, such as a user name, an account identifier, etc.) may be used to find user contact information, such as a mobile phone number or e-mail address, in a user database.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vaidya in view of Pierri to incorporate the teaching of Aydore and utilize the above feature, with the motivation of detecting anomalous events from categorical data using autoencoders based on user information, as recognized by Aydore (15-29).
Regarding claim 14, claim 14 recites limitations similar to those of claim 7 and is therefore rejected under the same rationale and motivation applied to claim 7.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Umeno (US 20200074238 A1) discloses “…as the degree of difference between the input image and the training image (reconstructed image) is larger, the anomaly score is closer to “1.0”, and as the degree of difference between the input image and the training image (reconstructed image) is smaller, the anomaly score is closer to “0.0”. When the anomaly score is calculated using an autoencoder, a plurality of terms are calculated from the correlation between the pixel values of the input image and the reconstructed image.”
Stergioudis (US 20210400075 A1) discloses “The data may be considered as anomalous if the reconstruction error is higher than a threshold which has been optimized using historical data (e.g., the training dataset 345). The intuition may be that legitimate transactions may have low reconstruction errors (since the auto-encoder 432 was trained to accurately reconstruct them) while anomalous transactions can have higher errors.”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BASSAM A NOAMAN whose telephone number is (571) 272-2705. The examiner can normally be reached Monday-Friday, 8:30 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eleni A. Shiferaw can be reached at (571) 272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BASSAM A NOAMAN/Primary Examiner, Art Unit 2497