Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 2 are rejected under 35 U.S.C. 103 as being unpatentable over Abdelrahman (US 20240330927 A1) in view of Huber (US 20210398128 A1).
With respect to claim 1, Abdelrahman teaches A system, comprising: a computing device comprising a processor and a memory ([0252] The one or more processors 1612 are components that execute instructions, such as instructions that obtain data, process the data, and provide output based on the processing. The one or more processors 1612 often obtain instructions and data stored in the memory 1614.); and
machine-readable instructions stored in the memory that, when executed by the processor, cause the computing device to at least ([0252] The one or more processors 1612 are components that execute instructions, such as instructions that obtain data, process the data, and provide output based on the processing. The one or more processors 1612 often obtain instructions and data stored in the memory 1614.):
filter data generated by execution of a distributed agent, hosted on a distributed ledger, for at least one trace of a transaction associated with a third-party large language model (Abdelrahman ¶ [0060] In some examples, the text 53 instructions for intended function and/or operation of the smart contract on a requested distributed ledger, ¶ [0169] A log collector 908 can be a software program configured to receive data from agents (e.g., directly or via the agent handler 906) and store the data, such as in a data store. To do so, the log collector 908 may include an endpoint for interacting with a plurality of distributed agents, ¶ [0176] In some embodiments, the AI system 910 may perform one or more of the following: anomaly detection to determine anomalies in transaction traffic … More generally, use cases to which the AI system 910 may be applied may involve traditional AI/ML models for classification, regression, clustering, pattern detection, anomaly detection, or also more advanced reinforcement learning settings for online network optimization, ¶ [0257] FIG. 17 A machine learning framework 1700 is a collection of software and data that implements artificial intelligence trained to provide output, such as predictive data, based on input. Examples of artificial intelligence that can be implemented with machine learning may include neural networks (including recurrent neural networks), language models (including so-called “large language models”), generative models, natural language processing models, adversarial networks, decision trees);
use a trained large language model to analyze the at least one trace of the transaction (Abdelrahman ¶ [0176] In some embodiments, the AI system 910 may perform one or more of the following: anomaly detection to determine anomalies in transaction traffic … More generally, use cases to which the AI system 910 may be applied may involve traditional AI/ML models for classification, regression, clustering, pattern detection, anomaly detection, or also more advanced reinforcement learning settings for online network optimization, ¶[0257] FIG. 17 A machine learning framework 1700 is a collection of software and data that implements artificial intelligence trained to provide output, such as predictive data, based on input. Examples of artificial intelligence that can be implemented with machine learning way include neural networks (including recurrent neural networks), language models (including so-called “large language models”), generative models, natural language processing models, adversarial networks, decision trees);
identify at least one anomaly related to the at least one trace of the transaction (Abdelrahman ¶ [0176] In some embodiments, the AI system 910 may perform one or more of the following: anomaly detection to determine anomalies in transaction traffic … More generally, use cases to which the AI system 910 may be applied may involve traditional AI/ML models for classification, regression, clustering, pattern detection, anomaly detection, or also more advanced reinforcement learning settings for online network optimization, ¶[0257] FIG. 17 A machine learning framework 1700 is a collection of software and data that implements artificial intelligence trained to provide output, such as predictive data, based on input. Examples of artificial intelligence that can be implemented with machine learning way include neural networks (including recurrent neural networks), language models (including so-called “large language models”), generative models, natural language processing models, adversarial networks, decision trees); and
Abdelrahman does not explicitly disclose the following limitation; however, Huber teaches block the transaction based at least in part on the at least one anomaly (Huber ¶ [0088] As illustrated, the storage of the transaction records that include data representing metadata related to a denied transaction includes a variety of advantages).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the distributed transaction of Abdelrahman to include the blocking of Huber in order to provide a security advantage (Huber, ¶ [0025]).
With respect to claim 2, Huber further teaches wherein the machine-readable instructions further cause the computing device to at least record, on a distributed ledger, the details, including transaction metadata, associated with the blocking of the transaction (Huber ¶ [0088] As illustrated, the storage of the transaction records that include data representing metadata related to a denied transaction includes a variety of advantages).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Abdelrahman and Huber, in further view of Tholar (US 20240193608 A1).
With respect to claim 3, Abdelrahman and Huber do not explicitly disclose the following limitation; however, Tholar teaches wherein the trained large language model is trained at least in part on preexisting data to differentiate between normal transactions and anomalous transactions (Tholar ¶ [0046] According to some embodiments of the present disclosure, the training may include retrieving from the data store, such as transactions store 105, a dataset of fraud-labeled transactions to train a model, such as fraud model 110, on the dataset of fraud-labeled transactions, to mark transactions as ‘similar’ or ‘novel’, and then retrieving from the data store a dataset of legit-labeled transactions to train a legit model 120 on the dataset of legit-labeled transactions, to mark transactions as ‘similar’ or ‘novel’.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the distributed transaction of Abdelrahman, in view of the blocking of Huber, to include the differentiation of Tholar in order to identify fraudulent transactions among transactions classified as legitimate (Tholar, ¶ [0003]).
Claims 4 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Abdelrahman and Huber, in further view of Li (US 20190236609 A1).
With respect to claim 4, neither Abdelrahman nor Huber explicitly discloses the following limitation; however, Li teaches wherein the machine-readable instructions further cause the computing device to at least retrain the trained large language model based at least in part on details, including transaction metadata, associated with blocking of the transaction (Li ¶ [0033] A computing platform can obtain the fraudulent sample and the normal sample as described above, and each sample includes a user operation sequence and a corresponding time sequence. The computing platform trains the fraudulent transaction detection model based on the operation sequence and the time sequence. More specifically, the user operation sequence and the corresponding time sequence [metadata] are processed by using a convolutional neural network, to train the fraudulent transaction detection model, ¶ [0143] An implementation of fine-tuning process involves the use of fraudulent transaction information. In this case, the model is fine-tuned to recognize fraudulent transactions. Using known information of fraudulent transactions, i.e., labels. An input of the model is constructed as sequence of transactions made by a customer, and the fraudulent labels are used to mark each transaction as legit or frauds. The model is fine-tuned to classify each transaction in the input as fraudulent or not, ¶ [0051] In another implementation, the user operation sequence is processed as the operation matrix by using a word embedding model. The word embedding model is a model used in natural language processing (NLP), and is used to convert a single word into a vector. In the simplest model, a group of features are constructed for each word to serve as corresponding vectors. Further, to reflect the relationship between words, for example, a category relationship or a subordinate relationship, a language model can be trained in various methods, to optimize vector expression.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the distributed transaction of Abdelrahman, in view of the blocking of Huber, to include the retraining of Li in order to provide more accurate and more comprehensive fraudulent transaction detection (Li, ¶ [0073]).
With respect to claim 8, neither Abdelrahman nor Huber explicitly discloses the following limitation; however, Li teaches wherein the trained large language model analyzes the at least one trace of the transaction by transforming the at least one trace of the transaction into at least one normalized vector representation (Li ¶ [0051] In another implementation, the user operation sequence is processed as the operation matrix by using a word embedding model.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the distributed transaction of Abdelrahman, in view of the blocking of Huber, to include the vector transformation of Li in order to provide more accurate and more comprehensive fraudulent transaction detection (Li, ¶ [0073]).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Abdelrahman and Huber, in further view of Douglas (US 20150193776 A1).
With respect to claim 5, neither Abdelrahman nor Huber explicitly discloses the following limitation; however, Douglas teaches wherein the machine-readable instructions that cause the computing device to block the transaction further cause the computing device to block the transaction when a predefined number of anomalies is reached (Douglas ¶ [0067] If customer 122's location is not verified ("no" in step 540), FSP server 111 determines whether to retry verifying location in step 542. If FSP server 111 decides not to retry ("no" in step 542) when, for example, a predetermined number of retries have been attempted, the FSP server 111 may block the electronic transaction and deny the mobile payment in step 584, thereby ending process 500. Alternatively, FSP server 111 may retry verifying location ("yes" in step 542) to account for possible errors in the initial location data transmissions. To retry verifying location, process 500 returns to step 530.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the distributed transaction of Abdelrahman, in view of the blocking of Huber, to include the blocking of Douglas in order to add an extra layer of security (Douglas, ¶ [0840]).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Abdelrahman and Huber, in further view of Ruan (US 20250117797 A1).
With respect to claim 6, neither Abdelrahman nor Huber explicitly discloses the following limitation; however, Ruan teaches wherein the machine-readable instructions that cause the computing device to block the transaction further cause the computing device to block the transaction based at least in part on an anomaly reaching a certain ranking (Ruan ¶ [0070] At block 306, the transaction authenticator 118 selects a subset of the list of fraud reasons as ranked at block 304. The subset of the list can be based on identifying a selected number of highest ranked fraud reasons. For example, in some embodiments, at block 306 the top three reasons, top two reasons, or the top reason can be selected from the list of fraud reasons. It is to be appreciated that the above numbers are examples and that the actual number can vary beyond the top three reasons, ¶ [0083] Referring again to FIG. 4, at block 310, the transaction authenticator 118 outputs an indication whether to approve or decline the transaction attempt based on the determining at block 308.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the distributed transaction of Abdelrahman, in view of the blocking of Huber, to include the ranking of Ruan in order to minimize customer and revenue loss (Ruan, ¶ [0017]).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Abdelrahman and Huber, in further view of Duke (US 20150161611 A1).
With respect to claim 7, neither Abdelrahman nor Huber explicitly discloses the following limitation; however, Duke teaches wherein the anomaly is identified at least in part due to a similarity between the anomaly and at least one preexisting anomaly (Duke ¶ [0007] computing a self-similarity score in response to a computed fraud score that is above a predetermined threshold, the self-similarity score comprising a similarity measure of the received transaction relative to a set of prior transactions in the data storage relating to the account, wherein the computed self-similarity score indicates similarity of the received transaction to other transactions of the account in the set of prior transactions; and [0008] determining the suggested action based on the computed fraud score and the computed self-similarity score.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the distributed transaction of Abdelrahman, in view of the blocking of Huber, to include the similarity of Duke in order to control risk and customer experience (Duke, ¶ [0056]).
Claims 9, 10, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Abdelrahman, in further view of Li, Huber, and Duke.
With respect to claim 9, Abdelrahman teaches A method, comprising: filtering data from a distributed ledger for at least one trace of a transaction associated with a third-party large language model (Abdelrahman ¶ [0060] In some examples, the text 53 instructions for intended function and/or operation of the smart contract on a requested distributed ledger, ¶ [0169] A log collector 908 can be a software program configured to receive data from agents (e.g., directly or via the agent handler 906) and store the data, such as in a data store. To do so, the log collector 908 may include an endpoint for interacting with a plurality of distributed agents, ¶ [0176] In some embodiments, the AI system 910 may perform one or more of the following: anomaly detection to determine anomalies in transaction traffic … More generally, use cases to which the AI system 910 may be applied may involve traditional AI/ML models for classification, regression, clustering, pattern detection, anomaly detection, or also more advanced reinforcement learning settings for online network optimization, ¶ [0257] FIG. 17 A machine learning framework 1700 is a collection of software and data that implements artificial intelligence trained to provide output, such as predictive data, based on input. Examples of artificial intelligence that can be implemented with machine learning may include neural networks (including recurrent neural networks), language models (including so-called “large language models”), generative models, natural language processing models, adversarial networks, decision trees);
Abdelrahman does not explicitly disclose the following limitation; however, Li teaches converting, using a trained large language model, the at least one trace of the transaction into at least one vector representation (Li ¶ [0033] A computing platform can obtain the fraudulent sample and the normal sample as described above, and each sample includes a user operation sequence and a corresponding time sequence. The computing platform trains the fraudulent transaction detection model based on the operation sequence and the time sequence. More specifically, the user operation sequence and the corresponding time sequence [metadata] are processed by using a convolutional neural network, to train the fraudulent transaction detection model, ¶ [0143] An implementation of fine-tuning process involves the use of fraudulent transaction information. In this case, the model is fine-tuned to recognize fraudulent transactions. Using known information of fraudulent transactions, i.e., labels. An input of the model is constructed as sequence of transactions made by a customer, and the fraudulent labels are used to mark each transaction as legit or frauds. The model is fine-tuned to classify each transaction in the input as fraudulent or not, ¶ [0051] In another implementation, the user operation sequence is processed as the operation matrix by using a word embedding model. The word embedding model is a model used in natural language processing (NLP), and is used to convert a single word into a vector. In the simplest model, a group of features are constructed for each word to serve as corresponding vectors. Further, to reflect the relationship between words, for example, a category relationship or a subordinate relationship, a language model can be trained in various methods, to optimize vector expression.);
identifying at least in part one anomaly related to the at least one vector representation (Li¶[0033] A computing platform can obtain the fraudulent sample and the normal sample as described above, and each sample includes a user operation sequence and a corresponding time sequence. The computing platform trains the fraudulent transaction detection model based on the operation sequence and the time sequence. More specifically, the user operation sequence and the corresponding time sequence [metadata] are processed by using a convolutional neural network, to train the fraudulent transaction detection model [anomaly], ¶ [0143] An implementation of fine-tuning process involves the use of fraudulent transaction information. In this case, the model is fine-tuned to recognize fraudulent transactions. Using known information of fraudulent transactions, i.e., labels. An input of the model is constructed as sequence of transactions made by a customer, and the fraudulent labels are used to mark each transaction as legit or frauds. The model is fine-tuned to classify each transaction in the input as fraudulent or no,¶[0051] In another implementation, the user operation sequence is processed as the operation matrix by using a word embedding model. The word embedding model is a model used in natural language processing (NLP), and is used to convert a single word into a vector. In the simplest model, a group of features are constructed for each word to serve as corresponding vectors. Further, to reflect the relationship between words, for example, a category relationship or a subordinate relationship, a language model can be trained in various methods, to optimize vector expression.)
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the distributed transaction of Abdelrahman to include the vector of Li in order to provide more accurate and more comprehensive fraudulent transaction detection (Li, ¶ [0073]).
Neither Abdelrahman nor Li explicitly discloses the following limitation; however, Huber teaches blocking the transaction based at least in part (Huber ¶ [0088] As illustrated, the storage of the transaction records that include data representing metadata related to a denied transaction includes a variety of advantages) [[on a similarity between the at least one anomaly related to the at least one vector representation and a preexisting anomaly related to a stored vector representation]]
recording, on the distributed ledger, details, including transaction metadata, of the blocking of the transaction (Huber ¶ [0088] As illustrated, the storage of the transaction records that include data representing metadata related to a denied transaction includes a variety of advantages.)
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the distributed transaction of Abdelrahman, in view of the vector of Li, to include the blocking of Huber in order to provide a security advantage (Huber, ¶ [0025]).
None of Abdelrahman, Li, and Huber explicitly discloses the following limitation; however, Duke teaches on a similarity between the at least one anomaly related to the at least one vector representation and a preexisting anomaly related to a stored vector representation (Duke ¶ [0007] computing a self-similarity score in response to a computed fraud score that is above a predetermined threshold, the self-similarity score comprising a similarity measure of the received transaction relative to a set of prior transactions in the data storage relating to the account, wherein the computed self-similarity score indicates similarity of the received transaction to other transactions of the account in the set of prior transactions; and [0008] determining the suggested action based on the computed fraud score and the computed self-similarity score.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the distributed transaction of Abdelrahman, in view of the vector of Li and the blocking of Huber, to include the similarity of Duke in order to control risk and customer experience (Duke, ¶ [0056]).
With respect to claim 10, Li further teaches wherein the anomaly is identified at least in part due to the vector representation being normalized (Li ¶[0051] In another implementation, the user operation sequence is processed as the operation matrix by using a word embedding model.).
With respect to claim 13, Li further teaches further comprising retraining the trained large language model based at least in part on the recording of the details, including transaction metadata, of the blocking of the transaction (Li ¶ [0033] A computing platform can obtain the fraudulent sample and the normal sample as described above, and each sample includes a user operation sequence and a corresponding time sequence. The computing platform trains the fraudulent transaction detection model based on the operation sequence and the time sequence. More specifically, the user operation sequence and the corresponding time sequence [metadata] are processed by using a convolutional neural network, to train the fraudulent transaction detection model, ¶ [0143] An implementation of fine-tuning process involves the use of fraudulent transaction information. In this case, the model is fine-tuned to recognize fraudulent transactions. Using known information of fraudulent transactions, i.e., labels. An input of the model is constructed as sequence of transactions made by a customer, and the fraudulent labels are used to mark each transaction as legit or frauds. The model is fine-tuned to classify each transaction in the input as fraudulent or not, ¶ [0051] In another implementation, the user operation sequence is processed as the operation matrix by using a word embedding model. The word embedding model is a model used in natural language processing (NLP), and is used to convert a single word into a vector. In the simplest model, a group of features are constructed for each word to serve as corresponding vectors. Further, to reflect the relationship between words, for example, a category relationship or a subordinate relationship, a language model can be trained in various methods, to optimize vector expression.).
With respect to claim 14, Abdelrahman teaches further comprising using the at least one trace of the transaction as a breach notification in a system of a third-party associated with the exchange (Abdelrahman ¶ [0178] In some embodiments, the reporting system 912 may provide an alert or notification to a user in response to the AI system 910 detecting an event (e.g., an anomalous transaction or a condition indicating that resources of the blockchain network are overloaded or underutilized).).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Abdelrahman, Li, Huber, and Duke, in further view of Lutich (US 20200005333 A1).
With respect to claim 11, none of Abdelrahman, Li, Huber, and Duke explicitly discloses the following limitation; however, Lutich teaches wherein blocking the transaction further comprises calculating a cosine similarity between the at least one anomaly and the preexisting anomaly (Lutich ¶ [0042] In certain embodiments, measurements are used to determine whether invoices 128a-n are indicative of fraud. For example, fraud detector 166 may measure a similarity between combined position vector 148a and one or more sample vectors determined to be free from fraud using cosine similarity. A high cosine similarity may represent similarities between combined position vector 148a and the one or more sample vectors, which may indicate that invoice 148a is fraudless.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the distributed transaction of Abdelrahman, in view of the vector of Li, the blocking of Huber, and the similarity of Duke, to include the cosine similarity of Lutich in order to detect and prevent malicious activity (Lutich, ¶ [0013]).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Abdelrahman, Li, Huber, and Duke, in further view of Shivakumar (US 20210026977 A1).
With respect to claim 12, none of Abdelrahman, Li, Huber, and Duke explicitly discloses the following limitation; however, Shivakumar teaches wherein blocking the transaction further comprises analyzing performance of the at least one vector representation on at least one of: a hallucination test, a bias test, a copyright test, a code generator test, a harmful content test, an offensive language test, a sensitive data element test, a license violation test, or a combination thereof (Shivakumar ¶ [0051] The processor 104 may receive 538 the sensitive data 114 and may perform 540 an action with respect to the non-sensitive data 112 and the sensitive data 114, e.g., approve or deny a transaction requested by a user 124.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the distributed transaction of Abdelrahman, in view of the vector of Li, the blocking of Huber, and the similarity of Duke, to include the sensitive data analysis of Shivakumar in order to improve user experience (Shivakumar, ¶ [0001]).
Claims 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Abdelrahman, in further view of Li, Duke, and Huber.
With respect to claim 15, Abdelrahman teaches A non-transitory, computer-readable medium, comprising machine-readable instructions that, when executed by a processor of a computing device, cause the computing device to at least:
detect an exchange of data with a third-party large language model on a distributed ledger (¶ [0178] In some embodiments, the reporting system 912 may provide an alert or notification to a user in response to the AI system 910 detecting an event);
filter the data for at least one trace of a transaction associated with the third-party large language model (¶ [0176] In some embodiments, the AI system 910 may perform one or more of the following: anomaly detection to determine anomalies in transaction traffic … More generally, use cases to which the AI system 910 may be applied may involve traditional AI/ML models for classification, regression, clustering, pattern detection, anomaly detection, or also more advanced reinforcement learning settings for online network optimization, ¶[0257] FIG. 17 A machine learning framework 1700 is a collection of software and data that implements artificial intelligence trained to provide output, such as predictive data, based on input. Examples of artificial intelligence that can be implemented with machine learning way include neural networks (including recurrent neural networks), language models (including so-called “large language models”), generative models, natural language processing models, adversarial networks, decision trees);
Abdelrahman does not explicitly disclose the following limitation; however, Li teaches transform the data into at least one normalized vector representation (Li ¶ [0051] In another implementation, the user operation sequence is processed as the operation matrix by using a word embedding model. The word embedding model is a model used in natural language processing (NLP), and is used to convert a single word into a vector. In the simplest model, a group of features are constructed for each word to serve as corresponding vectors. Further, to reflect the relationship between words, for example, a category relationship or a subordinate relationship, a language model can be trained in various methods, to optimize vector expression.);
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the distributed transaction of Abdelrahman to include the vector of Li in order to provide more accurate and more comprehensive fraudulent transaction detection (Li, ¶ [0073]).
Neither Abdelrahman nor Li explicitly discloses the following limitation; however, Duke teaches compare the at least one normalized vector representation to at least one preexisting vector representation from a database of vector representations (Duke ¶ [0007] computing a self-similarity score in response to a computed fraud score that is above a predetermined threshold, the self-similarity score comprising a similarity measure of the received transaction relative to a set of prior transactions in the data storage relating to the account, wherein the computed self-similarity score indicates similarity of the received transaction to other transactions of the account in the set of prior transactions; and [0008] determining the suggested action based on the computed fraud score and the computed self-similarity score.);
identify at least one anomaly based on a comparison of the at least one normalized vector representation to the at least one preexisting vector representation (Duke ¶[0007] computing a self-similarity score in response to a computed fraud score that is above a predetermined threshold, the self-similarity score comprising a similarity measure of the received transaction relative to a set of prior transactions in the data storage relating to the account, wherein the computed self-similarity score indicates similarity of the received transaction to other transactions of the account in the set of prior transactions; and [0008] determining the suggested action based on the computed fraud score and the computed self-similarity score.); and
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the distributed transaction of Abdelrahman in view of the vector of Li to include the comparison of Duke in order to control both risk and customer experience ([0056], Duke).
None of Abdelrahman, Li, and Duke explicitly discloses; however, Huber teaches block the transaction based at least in part on the at least one anomaly (Huber ¶[0088] As illustrated, the storage of the transaction records that include data representing metadata related to a denied transaction includes a variety of advantages).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the distributed transaction of Abdelrahman in view of the vector of Li and the similarity of Duke to include the blocking of Huber in order to provide a security advantage ([0025], Huber).
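As technical context for the combination mapped above (Li's vector transform, Duke's self-similarity comparison, and Huber's blocking of a flagged transaction), the scheme may be illustrated by the following minimal sketch. All function names, thresholds, and data here are hypothetical illustrations and do not appear in the cited references:

```python
import math

def normalize(vec):
    """Scale a vector to unit length (a normalized vector representation)."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm else vec

def cosine_similarity(a, b):
    """Similarity measure between two unit-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def is_anomaly(candidate, prior_vectors, threshold=0.5):
    """Flag an anomaly when the transaction vector is dissimilar to every
    preexisting vector stored for the account (cf. Duke's self-similarity)."""
    candidate = normalize(candidate)
    best = max((cosine_similarity(candidate, normalize(p)) for p in prior_vectors),
               default=0.0)
    return best < threshold

def process_transaction(vector, prior_vectors):
    """Block the transaction when an anomaly is identified (cf. Huber)."""
    return "blocked" if is_anomaly(vector, prior_vectors) else "approved"
```

For example, a transaction vector close to the account's prior vectors would be approved, while a dissimilar vector would be blocked.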
With respect to claim 16, Li further teaches wherein a large language model is used to transform the data (Li ¶[0051] In another implementation, the user operation sequence is processed as the operation matrix by using a word embedding model. The word embedding model is a model used in natural language processing (NLP), and is used to convert a single word into a vector. In the simplest model, a group of features are constructed for each word to serve as corresponding vectors. Further, to reflect the relationship between words, for example, a category relationship or a subordinate relationship, a language model can be trained in various methods).
With respect to claim 17, Li further teaches wherein the large language model is trained based at least in part on details, including transaction metadata, associated with the exchange of data (Li ¶[0033] A computing platform can obtain the fraudulent sample and the normal sample as described above, and each sample includes a user operation sequence and a corresponding time sequence. The computing platform trains the fraudulent transaction detection model based on the operation sequence and the time sequence. More specifically, the user operation sequence and the corresponding time sequence [metadata] are processed by using a convolutional neural network, to train the fraudulent transaction detection model; ¶[0143] An implementation of the fine-tuning process involves the use of fraudulent transaction information. In this case, the model is fine-tuned to recognize fraudulent transactions, using known information of fraudulent transactions, i.e., labels. An input of the model is constructed as a sequence of transactions made by a customer, and the fraudulent labels are used to mark each transaction as legitimate or fraudulent. The model is fine-tuned to classify each transaction in the input as fraudulent or not; ¶[0051] In another implementation, the user operation sequence is processed as the operation matrix by using a word embedding model. The word embedding model is a model used in natural language processing (NLP), and is used to convert a single word into a vector. In the simplest model, a group of features are constructed for each word to serve as corresponding vectors. Further, to reflect the relationship between words, for example, a category relationship or a subordinate relationship, a language model can be trained in various methods, to optimize vector expression.).
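For context, the word-embedding transform cited from Li ¶[0051] (converting each operation in a user operation sequence into a vector, yielding an operation matrix) can be sketched as follows. The embedding table, operation names, and values are hypothetical illustrations, not data from Li:

```python
# Toy embedding table mapping each operation "word" to a feature vector,
# analogous to the word-embedding model described in Li (values illustrative only).
EMBEDDINGS = {
    "login":    [0.1, 0.9],
    "transfer": [0.8, 0.2],
    "logout":   [0.2, 0.1],
}

def embed_sequence(operations):
    """Convert a user operation sequence into an operation matrix:
    one row vector per operation, as in Li's operation-matrix construction."""
    return [EMBEDDINGS[op] for op in operations]

# Each row of the result is the vector for one operation in the sequence.
matrix = embed_sequence(["login", "transfer", "logout"])
```

In a trained embedding model, these vectors would instead be learned so that related operations receive similar vectors.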
With respect to claim 18, Duke further teaches wherein the database of vector representations includes at least one preexisting vector representation based at least in part on a stored record of a preexisting anomaly (Duke ¶[0007] computing a self-similarity score in response to a computed fraud score that is above a predetermined threshold, the self-similarity score comprising a similarity measure of the received transaction relative to a set of prior transactions in the data storage relating to the account, wherein the computed self-similarity score indicates similarity of the received transaction to other transactions of the account in the set of prior transactions; and [0008] determining the suggested action based on the computed fraud score and the computed self-similarity score.).
Claim(s) 19 is (are) rejected under 35 U.S.C. 103 as being unpatentable over Abdelrahman, Li, Duke, and Huber in further view of Tsunoda (US 20050144295 A1).
With respect to claim 19, Abdelrahman, Li, Duke, and Huber do not explicitly disclose; however, Tsunoda teaches wherein the database of vector representations includes at least one preexisting vector representation based at least in part on a stored record of a generated anomaly (Tsunoda ¶[0258] Even if similarities and predicted vectors are stored beforehand, not all the stored data need to be recalculated.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the distributed transaction of Abdelrahman in view of the vector of Li, the similarity of Duke, and the blocking of Huber to include the database of Tsunoda in order to alleviate processing loads ([0285], Tsunoda).
Claim(s) 20 is (are) rejected under 35 U.S.C. 103 as being unpatentable over Abdelrahman, Li, Duke, and Huber in further view of Palyutina (US 20210092613 A1).
With respect to claim 20, Abdelrahman, Li, Duke, and Huber do not explicitly disclose; however, Palyutina teaches wherein the details, including transaction metadata, associated with the exchange of data are recorded in a private node of the distributed ledger, the private node being made accessible only to verified peers (Palyutina ¶[0040] The access network device 20 may carry out the method of FIG. 1 and establish and verify 110 the spatiotemporal information for the user information, in some embodiments in co-operation with or on the basis of signals from other access network device(s) 22, 24. Access network devices 20, 22, 24 may be connected to a private distributed network 60 and operate as nodes of the network 60. The user information may be stored in a private or permissioned distributed ledger stored at least in part of the nodes of the private network 60. The access network devices may also be configured to control access to the stored user information (in response to verification of the transfer transaction issued in block 230). The private network may be operated by telecom operators and/or authorities, for example.)
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the distributed transaction of Abdelrahman in view of the vector of Li, the similarity of Duke, and the blocking of Huber to include the private node of Palyutina in order to facilitate secure exchange of assets ([0034], Palyutina).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ATHAR N PASHA whose telephone number is (408)918-7675. The examiner can normally be reached Monday-Thursday and alternate Fridays, 7:30-4:30 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn can be reached on (571)272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ATHAR N PASHA/Primary Examiner, Art Unit 2657