Prosecution Insights
Last updated: April 19, 2026
Application No. 18/082,870

SELLER RISK DETECTION BY PRODUCT COMMUNITY AND SUPPLY CHAIN MODELLING WITH ONLY TRANSACTION RECORDS

Final Rejection (§101, §103)
Filed: Dec 16, 2022
Examiner: ALLEN, NICHOLAS E
Art Unit: 2154
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Paypal Inc.
OA Round: 4 (Final)
Grant Probability: 77% (Favorable)
OA Rounds: 5-6
To Grant: 3y 3m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 77% (above average; +22.0% vs TC avg; 585 granted / 760 resolved)
Interview Lift: +16.2% among resolved cases with interview
Avg Prosecution: 3y 3m (68 currently pending)
Total Applications: 828 across all art units

Statute-Specific Performance

§101: 22.7% (-17.3% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 16.1% (-23.9% vs TC avg)
§112: 4.7% (-35.3% vs TC avg)

Comparison baseline is a Tech Center average estimate; based on career data from 760 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In response to Applicant's claims filed on July 14, 2025, claims 1-20 are now pending for examination in the application.

Response to Arguments

This Office action is in response to the amendment filed July 14, 2025. In this action, claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Devarakonda et al. (US Pub. No. 20200210947), Escalona et al. (US Pub. No. 20220164397), Hertz et al. (US Pub. No. 20190354544), and Lollo et al. (US Pub. No. 20220366333), in further view of Jameson et al. (US Pub. No. 20210192522). The Jameson et al. reference has been added to address the amendment "determining, by the processor, according to the entity-specific classification, that the first entity presents a high risk."

Applicant's arguments: In regards to claim 1, on page 13, Applicant argues that the cited art fails to clearly and unequivocally disclose the claimed subject matter: "The same rationale applies to the current independent claims. An entity is classified as recited in the claims. Based on that entity classification, the processor determines that the entity presents a high risk, automatically rejects a transaction involving that entity, and prevents future transactions by that entity. Thus, the claim recites both a corrective action (rejecting the transaction) and a proactive action (preventing future transactions) to improve the technical field of electronic transaction processing in view of entity risk. This benefit is explicitly described," as alleged.

Examiner's Reply: Applicant argues that the amended claims comprise statutory subject matter. Examiner respectfully disagrees.
The examiner notes that the computer recited in the claims is being used for detecting risk using a transactional history (a computer being used as a generic tool). Identifying risks in a merchant environment does not improve the functioning of a computer. Therefore, the claims generally link the abstract idea to a computer environment and do not integrate the abstract idea into a practical application.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The judicial exception is not integrated into a practical application, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis in support of these findings is provided below, in accordance with the 2019 Revised Patent Subject Matter Eligibility Guidance (hereinafter 2019 PEG).

Step 1. In accordance with Step 1 of the eligibility inquiry (as explained in MPEP 2106), it is first noted that the claimed method (claims 1-9), system (claims 10-16), and method (claims 17-20) are directed to eligible categories of subject matter and therefore satisfy Step 1.
Step 2A, Prong One. Independent claims 1 and 10 recite the following limitations directed towards Mental Processes and Mathematical Concepts; each annotated limitation recites a mental process of observation and/or evaluation capable of being performed by the human mind:

generating, by a processor, a first data vector of a first entity based on a set of data records associated with representative activities of the first entity (generating first vector data), wherein the set of data records comprise: product data associated with the first entity, and at least one relationship data representing an activities relationship between the first entity and a plurality of second entities, wherein the first data vector encodes the product data and the at least one relationship data of the first entity (encoding product data);

generating, by the processor, a second data vector of the first entity encoding a data representation of a plurality of topological connections within a supply chain of the first entity (generating second vector data);

generating, by the processor, a combined data vector of the first entity by combining the first data vector and the second data vector (generating additional vector data);

generating, by the processor, via a clustering space machine learning model, an entity vector representing the first entity based on the combined data vector (generating an entity vector);

generating, by the processor, via a classification machine learning model, an entity-specific classification of the first entity based on the entity vector representing the first entity (generating an entity-specific classification); and

determining, by the processor, according to the entity-specific classification, that the first entity presents a high risk (determining a high-risk entity) and, in response: automatically rejecting, by the processor, a transaction involving the first entity (rejecting a transaction); and automatically preventing, by the processor, the first entity from engaging in future transactions (preventing future transactions).

Step 2A, Prong Two. In accordance with Step 2A, prong two of the 2019 PEG, the judicial exception is not integrated into a practical application because claim 10 recites only the following additional elements: a processor (a generic processor/component performing a generic computer function); a non-transitory computer-readable medium (a generic processor/component performing a generic computer function) having stored thereon instructions that are executable by the processor to cause the system to perform the operations; and the clustering space machine learning model and the classification machine learning model (the machine learning models merely automate the claimed steps and are no more than mere instructions to apply the exception using generic computer components). The "automatically rejecting" and "automatically preventing" steps likewise do no more than apply the exception on a computer. The claim as a whole merely describes how to generally "apply" the exception in a computer environment.
Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.

Step 2B. Similar to the analysis under Step 2A, Prong Two, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Because the additional elements of the independent claims amount to insignificant extra-solution activity and/or mere instructions, the additional elements do not add significantly more to the judicial exception such that the independent claims as a whole would be patent eligible.

Independent claim 17 recites the following limitations directed towards Mental Processes; each annotated limitation recites a mental process of observation and/or evaluation capable of being performed by the human mind:

generating, by a processor, a first data vector of a first entity based on a set of data records associated with representative activities of the first entity, wherein the set of data records comprise: product data associated with the first entity, and at least one relationship data representing an activities relationship between the first entity and a plurality of second entities, wherein the first data vector encodes the product data and the at least one relationship data of the first entity (generating vector data);

generating, by the processor, a second data vector of the first entity encoding a data representation of a plurality of topological connections between the plurality of second entities and the first entity, wherein the plurality of topological connections define a shape that represents a plurality of relationships between the first entity and at least one of the plurality of second entities within a supply chain of the first entity (generating vector data);

generating, by the processor, a combined data vector of the first entity by combining the first data vector and the second data vector (generating additional vector data);

generating, by the processor, via a clustering space machine learning model, an entity vector representing the first entity based on the combined data vector (generating additional vector data);

generating a prediction that the first entity exhibits a characteristic by inputting the combined data vector into a multi-dimensional clustering space machine learning model (generating a prediction); and

wherein the prediction is generated based at least in part on a determination that the shape represented by the plurality of topological connections is similar to a reference shape represented by a plurality of topological connections of one of the plurality of second entities that has been identified as having the characteristic (generating a prediction).

Therefore, independent claims 1, 10, and 17 are rejected under 35 U.S.C. 101.
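As technical context for the limitations analyzed above, the claimed pipeline (a first vector from product data, a second vector from supply-chain topology, concatenation per claim 7, an embedding in a clustering space, and an entity-specific classification driving the rejection/prevention steps) can be sketched in minimal form. This is an illustrative sketch with hypothetical data and a nearest-centroid stand-in for the claimed machine learning models, not the applicant's actual implementation:

```python
import math

# Hypothetical stand-ins for the claimed models and data; nothing here is
# drawn from the application's actual implementation.

def combine(first_vec, second_vec):
    # Claim 7: "combining" the two vectors by concatenation.
    return first_vec + second_vec

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(entity_vec, centroids):
    # Nearest-centroid stand-in for the clustering-space / classification
    # machine learning models: the label of the closest centroid serves as
    # the entity-specific classification.
    return min(centroids, key=lambda label: distance(entity_vec, centroids[label]))

first_vec = [0.9, 0.1]   # e.g., encoded product data (hypothetical values)
second_vec = [0.8, 0.7]  # e.g., encoded supply-chain topology (hypothetical)
entity_vec = combine(first_vec, second_vec)

centroids = {
    "high_risk": [1.0, 0.0, 1.0, 1.0],
    "low_risk": [0.0, 1.0, 0.0, 0.0],
}
label = classify(entity_vec, centroids)

blocked = set()
if label == "high_risk":
    # The claimed responsive actions: reject the pending transaction and
    # prevent the entity from engaging in future transactions.
    blocked.add("entity-1")
```

Under this toy model the combined vector falls nearer the high-risk centroid, so the entity is flagged and added to the block set.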
With respect to claim 2 (Step 2A, prong one of the 2019 PEG): training, by a processor, the clustering space machine learning model to generate the entity vector based at least in part on a training data set comprising a set of entities and a set of classification labels associated with the set of entities (a mental process of observation and/or evaluation capable of being performed by the human mind: generating additional vector data). Step 2A, Prong Two: this judicial exception is not integrated into a practical application because there are no additional elements to provide a practical application. Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

With respect to claims 3, 12, and 19 (Step 2A, prong one of the 2019 PEG): training, by the processor, the clustering space machine learning model to generate a multi-dimensional clustering space comprising the entity vector by: defining, by the processor, a training data set comprising a plurality of training entities, each training entity having a first data vector based on a set of data records associated with the training entity and a second data vector based on a data representation of a plurality of topological connections between a plurality of second training entities and the training entity (a mental process: generating training data); and minimizing, by the processor, a loss function, the loss function comprising: a mathematical combination of a first data vector of a first training entity and a first data vector of a second training entity (a mathematical concept: calculating); and a mathematical combination of the second data vector of the first training entity and a predicted second data vector of the first training entity (a mathematical concept: calculating). Step 2A, Prong Two: inputting, by the processor, the training data set into the machine learning model (insignificant extra-solution activity of data gathering). Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

With respect to claims 4 and 13 (Step 2A, prong one of the 2019 PEG): the product data comprises at least one product description, wherein the at least one product description is tokenized and aggregated to form at least one word vector, wherein the at least one word vector is combined to generate a product information vector (a mental process: generating additional vector data). Step 2A, Prong Two: this judicial exception is not integrated into a practical application because there are no additional elements to provide a practical application. Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
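The tokenize-aggregate-combine handling of product descriptions recited in claims 4 and 13 (and the tokenize/aggregate/rank steps of claims 6 and 15, addressed below) can be illustrated with a minimal sketch. The sample descriptions and the frequency-based ranking are hypothetical choices for illustration only:

```python
from collections import Counter

def tokenize(description):
    # Hypothetical tokenizer: lowercase whitespace split.
    return description.lower().split()

descriptions = ["USB charging cable", "Braided USB cable 2m"]  # hypothetical data

# Aggregate the token sequences into a single "activity document".
activity_document = [tok for d in descriptions for tok in tokenize(d)]

# Rank tokens by frequency to extract representative words.
counts = Counter(activity_document)
representative = [word for word, _ in counts.most_common(2)]

# Combine per-word counts into a single product information vector
# over a sorted vocabulary.
vocab = sorted(counts)
product_vector = [counts[word] for word in vocab]
```

A real system would likely substitute learned word embeddings for the raw counts, but the claimed tokenize/aggregate/combine sequence is the same shape.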
With respect to claims 5 and 14 (Step 2A, prong one of the 2019 PEG): the first entity is a merchant (a mental process of observation and/or evaluation capable of being performed by the human mind: generating additional vector data); the plurality of second entities comprises a plurality of other merchants, buyers, or a combination thereof, wherein the data representation of the plurality of topological connections between the merchant and the plurality of other merchants, buyers, or a combination thereof is extracted using a Power-Law degree distribution (a mental process: generating additional vector data). Step 2A, Prong Two: this judicial exception is not integrated into a practical application because there are no additional elements to provide a practical application. Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

With respect to claims 6 and 15 (Step 2A, prong one of the 2019 PEG): tokenizing a set of item descriptions associated with the first entity to generate a sequence of word tokens (a mental process: tokenizing for vector data); aggregating the sequence of word tokens into a single activity document (a mental process: aggregating for vector data); ranking the word tokens to extract a set of representative words from the set of item descriptions (a mental process: ranking for vector data).
Step 2A, Prong Two: this judicial exception is not integrated into a practical application because there are no additional elements to provide a practical application. Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

With respect to claim 7 (Step 2A, prong one of the 2019 PEG): wherein combining the first data vector and the second data vector comprises concatenating the first data vector of the first entity and the second data vector of the first entity (a mental process of observation and/or evaluation capable of being performed by the human mind: concatenating for vector data). Step 2A, Prong Two: this judicial exception is not integrated into a practical application because there are no additional elements to provide a practical application. Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

With respect to claim 8 (Step 2A, prong one of the 2019 PEG): the Examiner is of the position that the dependent claim is directed toward additional elements. Step 2A, Prong Two: transmitting, by the processor, a notification to the first entity, wherein the notification is related to the entity-specific classification (insignificant extra-solution activity of sending a notification). Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

With respect to claim 9 (Step 2A, prong one of the 2019 PEG): removing, by the processor, the first entity from an activity platform (a mental process: removing an entity).
Step 2A, Prong Two: this judicial exception is not integrated into a practical application because there are no additional elements to provide a practical application. Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

With respect to claims 11 and 18 (Step 2A, prong one of the 2019 PEG): training the clustering space machine learning model to generate the multi-dimensional clustering space based at least in part on a training data set comprising a set of entities and a set of classification labels associated with the set of entities (a mental process of observation and/or evaluation capable of being performed by the human mind: generating vector data). Step 2A, Prong Two: this judicial exception is not integrated into a practical application because there are no additional elements to provide a practical application. Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

With respect to claims 16 and 20 (Step 2A, prong one of the 2019 PEG): reject a transaction involving the first entity; remove the first entity from an activity platform; or prevent the first entity from engaging in future activities (a mental process: classifying data). Step 2A, Prong Two: transmit a notification to the first entity, wherein the notification is related to the entity-specific classification (insignificant extra-solution activity of sending a notification). Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
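The "Power-Law degree distribution" extraction recited in claims 5 and 14 can be illustrated with a minimal sketch: node degrees are tallied from merchant-buyer transaction edges, and a power-law exponent is estimated with the standard continuous maximum-likelihood formula alpha = 1 + n / sum(ln(d_i / d_min)). The edge list and the choice d_min = 1 are hypothetical, for illustration only:

```python
import math
from collections import Counter

# Hypothetical merchant-buyer transaction edges.
edges = [("m1", "b1"), ("m1", "b2"), ("m1", "b3"), ("m2", "b1"), ("m3", "b1")]

# Tally the degree of each node (a summary of its topological connections).
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Continuous MLE estimate of the power-law exponent with d_min = 1:
# alpha = 1 + n / sum(ln(d_i / d_min)).
# Assumes at least one node has degree > d_min, so the sum is nonzero.
d_min = 1
degrees = [d for d in degree.values() if d >= d_min]
alpha = 1 + len(degrees) / sum(math.log(d / d_min) for d in degrees)
```

The resulting exponent (here alpha is roughly 3.73 for the toy graph) is one way a degree distribution could feed the second data vector.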
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Devarakonda et al. (US Pub. No. 20200210947), Escalona et al. (US Pub. No. 20220164397), Hertz et al. (US Pub. No. 20190354544), and Lollo et al. (US Pub. No. 20220366333), in further view of Jameson et al. (US Pub. No. 20210192522).

With respect to claim 1, Devarakonda et al.
teaches a method comprising: generating, by a processor, a first data vector of a first entity based on a set of data records associated with representative activities of the first entity (Paragraph 85 discloses vectorization of word sequences or N-grams of word sequences and/or encoded representations of data using compression techniques or other mathematical transformations), wherein the set of data records comprise: product data associated with the first entity, and at least one relationship data representing an activities relationship between the first entity and a plurality of second entities, wherein the first data vector encodes the product data and the at least one relationship data of the first entity (Paragraph 83 discloses that the supply chain data 306 may consist of continuous signals in time, encoded entities (e.g., product codes, locations, network node numbers), textual information (e.g., the names of products, categories, users, customers), and documents containing unstructured data (e.g., schedules, agreements, marketing campaigns, etc.)).

Devarakonda et al. does not disclose generating, by the processor, a second data vector of the first entity encoding a data representation of a plurality of topological connections between the plurality of second entities and the first entity, wherein the plurality of topological connections define a shape that represents a plurality of relationships between the first entity and at least one of the plurality of second entities within a supply chain of the first entity. However, Escalona et al. teaches generating, by the processor, a second data vector of the first entity encoding a data representation of a plurality of topological connections within a supply chain of the first entity (Paragraph 6 discloses that the one or more ML models may output feature vectors (e.g., triples) that include entities, risks, and relationships extracted from the input documents, and Paragraph 7 discloses a variety of industry sectors, such as healthcare providers, law firms, supply chains, non-profit agencies, government agencies, or the like, for use in monitoring compliance with government regulations or industry sector rules). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Devarakonda et al. with Escalona et al. This would have facilitated identifying supply relationships for risk detection by using vectors in the analysis of transactions. See Escalona et al., Paragraphs 5-11.

Devarakonda et al. as modified by Escalona et al. does not explicitly disclose generating, by the processor, a second data vector of the first entity encoding a data representation of a plurality of topological connections between the plurality of second entities and the first entity, wherein the plurality of topological connections define a shape that represents a plurality of relationships between the first entity and at least one of the plurality of second entities within a supply chain of the first entity. However, Hertz et al. teaches generating, by the processor, a second data vector of the first entity encoding a data representation of a plurality of topological connections between the plurality of second entities and the first entity, wherein the plurality of topological connections define a shape that represents a plurality of relationships between the first entity and at least one of the plurality of second entities within a supply chain of the first entity (Paragraph 37 discloses a data shape). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Devarakonda et al. and Escalona et al. with Hertz et al. This would have facilitated identifying supply relationships for risk detection by using shapes in the analysis of transactions. See Hertz et al., Paragraphs 2-22.

Devarakonda et al. as modified by Escalona et al. and Hertz et al. does not disclose generating, by the processor, a combined data vector of the first entity by combining the first data vector and the second data vector. However, Lollo et al. teaches generating, by the processor, a combined data vector of the first entity by combining the first data vector and the second data vector (Paragraph 40 discloses that super-categories can be created from different categorical vectors and need not be exclusive); generating, by the processor, via a clustering space machine learning model, an entity vector representing the first entity based on the combined data vector (Paragraph 38 discloses that assessment system 102 may determine the impact categories using any of the machine learning models described in relation to FIG. 2 (e.g., through dimension reduction techniques like principal components analysis, clustering algorithms, and classification techniques)); and generating, by the processor, via a classification machine learning model, an entity-specific classification of the first entity based on the entity vector representing the first entity (Paragraph 38, as quoted above). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Devarakonda et al., Escalona et al., and Hertz et al. with Lollo et al. This would have facilitated identifying supply relationships for risk detection by using vectors in the analysis of transactions. See Lollo et al., Paragraphs 3-6.

Devarakonda et al. as modified by Escalona et al., Hertz et al., and Lollo et al. does not disclose determining, by the processor, according to the entity-specific classification, that the first entity presents a high risk. However, Jameson et al. teaches determining, by the processor, according to the entity-specific classification, that the first entity presents a high risk (Paragraph 78 discloses that the analysis can determine, from data of past transactions, a low-risk authorization score threshold for future transactions and a high-risk authorization score threshold for future transactions) and, in response: automatically rejecting, by the processor, a transaction involving the first entity (Paragraph 78 discloses that, for each of a plurality of future transactions, the implementation: (i) compares the transaction with the implemented set of intelligent fraud rules in the real-time decisioning processor; (ii) determines from the comparison whether the transaction should be declined/approved/case created; and (iii) transmits a decline message when the transaction should be declined); and automatically preventing, by the processor, the first entity from engaging in future transactions (Paragraph 78 discloses that the business rules, so optimized, are then implemented in a real-time decisioning processor; thereafter, for each of a plurality of future transactions, the implementation: (i) compares the transaction with the implemented set of intelligent fraud rules in the real-time decisioning processor; and (ii) determines from the comparison whether the transaction should be declined/approved/case created). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Devarakonda et al., Escalona et al., Hertz et al., and Lollo et al. with Jameson et al. This would have facilitated identifying supply relationships for risk detection by using vectors in the analysis of transactions. See Jameson et al., Paragraphs 1-3.

The Devarakonda et al. reference as modified by Escalona et al., Hertz et al., Lollo et al., and Jameson et al. teaches all the limitations of claim 1. With respect to claim 2, Devarakonda et al.
teaches the method of claim 1, further comprising: training, by a processor, the clustering space machine learning model to generate the entity vector based at least in part on a training data set comprising a set of entities and a set of classification labels associated with the set of entities (Paragraph 77 discloses the supply chain training data 304 and the identified features 302, the machine-learning application 218 (e.g., model) is trained by machine-learning application trainer 224). The Devarakonda et al. reference as modified by Escalona et al. and Hertz et al. and Lollo et al. teaches all the limitations of claim 1. With respect to claim 3, Devarakonda et al. teaches the method of claim 1, further comprising: training, by the processor, the clustering space machine learning model to generate a multi-dimensional clustering space (Paragraph 106 discloses clustering algorithms to manifest a clustering of the information) comprising the entity vector by: defining a training data set comprising a plurality of training entities, each training entity having a first data vector based on a set of data records associated with the training entity and a second data vector based on a data representation of a plurality of topological connections between a plurality of second training entities and the training entity (Paragraph 6 discloses The machine-learning application, previously trained with supply chain training data, receives supply chain data); inputting, by the processor, the training data set into the machine learning model (Paragraph 6 discloses The machine-learning application, previously trained with supply chain training data, receives supply chain data); and minimizing, by the processor, a loss function, the loss function comprising: a mathematical combination of a first data vector of a first training entity and a first data vector of a second training entity (Paragraph 85 discloses the machine-learning application 218 modifies the features 302 (e.g., 
heterogeneous features) using mathematical, statistical, information theoretic transformations. Examples of the mathematical, statistical, and information theoretic transformations include log, exponent, sine or cosine moments of a feature distribution, mutual information score across multiple features, vectorization of word sequences or N-grams of word sequences and/or encoded representations of data using compression techniques or other mathematical transformations); and a mathematical combination of the second data vector of the first training entity and a predicted second data vector of the first training entity (Paragraph 85 discloses the machine-learning application 218 modifies the features 302 (e.g., heterogeneous features) using mathematical, statistical, information theoretic transformations. Examples of the mathematical, statistical, and information theoretic transformations include log, exponent, sine or cosine moments of a feature distribution, mutual information score across multiple features, vectorization of word sequences or N-grams of word sequences and/or encoded representations of data using compression techniques or other mathematical transformations). The Devarakonda et al. reference as modified by Escalona et al. and Hertz et al. and Lollo et al. teaches all the limitations of claim 1. With respect to claim 4, Devarakonda et al. 
teaches the method of claim 1, wherein: the product data comprises at least one product description, wherein the at least one product description is tokenized and aggregated to form at least one word vector (Paragraph 85 discloses vectorization of word sequences or N-grams of word sequences and/or encoded representations of data using compression techniques or other mathematical transformations and Paragraph 83 discloses the supply chain data 306 may consist of continuous signals in time, encoded entities (e.g., product codes, locations, network node numbers), textual information (e.g., the names of products, categories, users, customers), and documents containing unstructured data (e.g., schedules, agreements, marketing campaigns etc.)), wherein the at least one word vector is combined to generate a product information vector (Paragraph 85 discloses vectorization of word sequences or N-grams of word sequences and/or encoded representations of data using compression techniques or other mathematical transformations). The Devarakonda et al. reference as modified by Escalona et al. and Hertz et al. and Lollo et al. teaches all the limitations of claim 1. With respect to claim 5, Devarakonda et al. teaches the method of claim 1, wherein: the first entity is a merchant (Paragraph 83 discloses The input data model encapsulates relationships and links disparate information together in time, both as entities and as a hierarchy that reflects an actual supply chain execution sequence of the supply chain 100.
The supply chain data 306 may consist of continuous signals in time, encoded entities (e.g., product codes, locations, network node numbers), textual information (e.g., the names of products, categories, users, customers), and documents containing unstructured data (e.g., schedules, agreements, marketing campaigns etc.)); the plurality of second entities comprises a plurality of other merchants, buyers or a combination thereof (Paragraph 83 discloses The input data model encapsulates relationships and links disparate information together in time, both as entities and as a hierarchy that reflects an actual supply chain execution sequence of the supply chain 100. The supply chain data 306 may consist of continuous signals in time, encoded entities (e.g., product codes, locations, network node numbers), textual information (e.g., the names of products, categories, users, customers), and documents containing unstructured data (e.g., schedules, agreements, marketing campaigns etc.)), wherein the data representation of the plurality of topological connections between the merchant and the plurality of other merchants, buyers or a combination thereof is extracted using a Power-Law degree distribution (Paragraph 85 discloses the machine-learning application 218 modifies the features 302 (e.g., heterogeneous features) using mathematical, statistical, information theoretic transformations. Examples of the mathematical, statistical, and information theoretic transformations include log, exponent, sine or cosine moments of a feature distribution, mutual information score across multiple features, vectorization of word sequences or N-grams of word sequences and/or encoded representations of data using compression techniques or other mathematical transformations). The Devarakonda et al. reference as modified by Escalona et al. and Hertz et al. and Lollo et al. teaches all the limitations of claim 1. With respect to claim 6, Devarakonda et al. 
teaches the method of claim 1, further comprising generating the first data vector of the first entity by: tokenizing a set of item descriptions associated with the first entity to generate a sequence of word tokens (Paragraph 85 discloses vectorization of word sequences or N-grams of word sequences and/or encoded representations of data using compression techniques or other mathematical transformations); aggregating the sequence of word tokens into a single activity document (Paragraph 85 discloses vectorization of word sequences or N-grams of word sequences and/or encoded representations of data using compression techniques or other mathematical transformations); ranking the word tokens to extract a set of representative words from the set of item descriptions (Paragraph 88 discloses the machine-learning application 218 identifies, scores, and ranks strong causal attributes that are inferred to relate significantly to the objective of interest). The Devarakonda et al. reference as modified by Escalona et al. and Hertz et al. and Lollo et al. teaches all the limitations of claim 1. With respect to claim 7, Devarakonda et al. teaches the method of claim 1, wherein combining the first data vector and the second data vector comprises concatenating the first data vector of the first entity and the second data vector of the first entity (Paragraph 88 discloses Representations generated at operation 506 are concatenated with the transformed heterogeneous features developed at operation 502 and with the ranked causal factors generated at operation 504 to provide synthetized and better organized information, in addition to causal attribution). The Devarakonda et al. reference as modified by Escalona et al. and Hertz et al. and Lollo et al. teaches all the limitations of claim 1. With respect to claim 8, Devarakonda et al. 
teaches the method of claim 1, further comprising, according to the entity-specific classification of the first entity, one or more of: transmitting, by the processor, a notification to the first entity, wherein the notification is related to the entity-specific classification (Paragraph 106 discloses an unsupervised classification is performed. For example, the machine-learning application trainer 224 and/or access applications 228 and/or data analyst may utilize the concatenation described in FIG. 7 and clustering algorithms to manifest a clustering of the information. At operation 804, class segmentation labels are generated based on the clustering of the information). The Devarakonda et al. reference as modified by Escalona et al. and Hertz et al. and Lollo et al. teaches all the limitations of claim 1. With respect to claim 9, Devarakonda et al. teaches the method of claim 1, further comprising, according to the entity-specific classification of the first entity, one or more of: removing, by the processor, the first entity from an activity platform (Paragraph 176 discloses a remove request 1554). With respect to claim 10, Devarakonda et al. teaches a system for configuring a search engine to classify a search query, the system comprising: a processor (Fig. 15 discloses a processor); and a non-transitory computer-readable medium (Fig. 
15 discloses a medium) having stored thereon instructions that are executable by the processor to cause the system to perform operations comprising: generating a first data vector of a first entity based on a set of data records associated with representative activities of the first entity (Paragraph 85 discloses vectorization of word sequences or N-grams of word sequences and/or encoded representations of data using compression techniques or other mathematical transformations), wherein the set of data records comprise: product data associated with the first entity, and at least one relationship data representing an activities relationship between the first entity and a plurality of second entities, wherein the first data vector encodes the product data and the at least one relationship data of the first entity (Paragraph 83 discloses The supply chain data 306 may consist of continuous signals in time, encoded entities (e.g., product codes, locations, network node numbers), textual information (e.g., the names of products, categories, users, customers), and documents containing unstructured data (e.g., schedules, agreements, marketing campaigns etc.)). Devarakonda et al. does not disclose generating, by the processor, a second data vector of the first entity encoding a data representation of a plurality of topological connections between the plurality of second entities and the first entity, wherein the plurality of topological connections define a shape that represents a plurality of relationships between the first entity and at least one of the plurality of second entities within a supply chain of the first entity. However, Escalona et al. 
teaches generating, by the processor, a second data vector of the first entity encoding a data representation of a plurality of topological connections within a supply chain of the first entity (Paragraph 6 discloses the one or more ML models may output feature vectors (e.g., triples) that include entities, risks, and relationships extracted from the input documents and Paragraph 7 discloses a variety of industry sectors, such as healthcare providers, law firms, supply chains, non-profit agencies, government agencies, or the like, for use in monitoring compliance with government regulations or industry sector rules). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Devarakonda et al. with Escalona et al. This would have facilitated identifying supply relationships for risk detection by using vectors in the analysis of transactions. See Escalona et al. Paragraph(s) 5-11. Devarakonda et al. as modified by Escalona et al. does not explicitly disclose generating, by the processor, a second data vector of the first entity encoding a data representation of a plurality of topological connections between the plurality of second entities and the first entity, wherein the plurality of topological connections define a shape that represents a plurality of relationships between the first entity and at least one of the plurality of second entities within a supply chain of the first entity. However, Hertz et al.
teaches generating, by the processor, a second data vector of the first entity encoding a data representation of a plurality of topological connections between the plurality of second entities and the first entity, wherein the plurality of topological connections define a shape that represents a plurality of relationships between the first entity and at least one of the plurality of second entities within a supply chain of the first entity (Paragraph 37 discloses a data shape). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Devarakonda et al. and Escalona et al. with Hertz et al. This would have facilitated identifying supply relationships for risk detection by using shapes in the analysis of transactions. See Hertz et al. Paragraph(s) 2-22. Devarakonda et al. as modified by Escalona et al. and Hertz et al. does not disclose generating, by the processor, a combined data vector of the first entity by combining the first data vector and the second data vector. However, Lollo et al. teaches generating, by the processor, a combined data vector of the first entity by combining the first data vector and the second data vector (Paragraph 40 discloses super-categories can be created from different categorical vectors and need not be exclusive); generating, by the processor, via a clustering space machine learning model, an entity vector representing the first entity based on the combined data vector (Paragraph 38 discloses assessment system 102 may determine the impact categories using any of the machine learning models described below in relation to FIG.
2 (e.g., through dimension reduction techniques like principal components analysis, clustering algorithms, and classification techniques)); and generating, by the processor, via a classification machine learning model, an entity-specific classification of the first entity based on the entity vector representing the first entity (Paragraph 38 discloses assessment system 102 may determine the impact categories using any of the machine learning models described below in relation to FIG. 2 (e.g., through dimension reduction techniques like principal components analysis, clustering algorithms, and classification techniques)). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Devarakonda et al. and Escalona et al. and Hertz et al. with Lollo et al. This would have facilitated identifying supply relationships for risk detection by using vectors in the analysis of transactions. See Lollo et al. Paragraph(s) 3-6. Devarakonda et al. as modified by Escalona et al. and Hertz et al. and Lollo et al. does not disclose determining, by the processor, according to the entity-specific classification, that the first entity presents a high risk. However, Jameson et al.
teaches determining, by the processor, according to the entity-specific classification, that the first entity presents a high risk (Paragraph 78 discloses analysis can determine, from data of past transactions, a low risk authorization score threshold for future transactions, and a high risk authorization score threshold for future transactions) and, in response: automatically rejecting, by the processor, a transaction involving the first entity (Paragraph 78 discloses for each of a plurality of future transactions, the implementation: (i) compares the transaction with the implemented set of intelligent fraud rules in the real time decisioning processor; (ii) determines from the comparison whether the transaction should be declined/approved/case created; and (iii) transmits a decline message when the transaction should be declined); automatically preventing, by the processor, the first entity from engaging in future transactions (Paragraph 78 discloses business rules, so optimized, are then implemented in a real time decisioning processor. Thereafter, for each of a plurality of future transactions, the implementation: (i) compares the transaction with the implemented set of intelligent fraud rules in the real time decisioning processor; (ii) determines from the comparison whether the transaction should be declined/approved/case created). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Devarakonda et al. and Escalona et al. and Hertz et al. and Lollo et al. with Jameson et al. This would have facilitated identifying supply relationships for risk detection by using vectors in the analysis of transactions. See Jameson et al. Paragraph(s) 1-3. The Devarakonda et al. reference as modified by Escalona et al. and Hertz et al. and Lollo et al. and Jameson et al. teaches all the limitations of claim 10. With respect to claim 11, Devarakonda et al.
teaches the system of claim 10, wherein the non-transitory computer-readable medium stores further instructions that, when executed by the processor, cause the system to perform further operations comprising: training the clustering space machine learning model to generate the multi- dimensional clustering space based at least in part on a training data set comprising a set of entities and a set of classification labels associated with the set of entities (Paragraph 106 discloses clustering algorithms to manifest a clustering of the information. At operation 804, class segmentation labels are generated based on the clustering of the information. For examples, the class segmentation labels may be used to identify products produced by the supply chain 100 or nodes in the supply chain 100). With respect to claim 12, it is rejected on grounds corresponding to above rejected claim 3, because claim 12 is substantially equivalent to claim 3. With respect to claim 13, it is rejected on grounds corresponding to above rejected claim 4, because claim 13 is substantially equivalent to claim 4. With respect to claim 14, it is rejected on grounds corresponding to above rejected claim 5, because claim 14 is substantially equivalent to claim 5. With respect to claim 15, it is rejected on grounds corresponding to above rejected claim 6, because claim 15 is substantially equivalent to claim 6. The Devarakonda et al. reference as modified by Escalona et al. and Hertz et al. and Lollo et al. and Jameson et al. teaches all the limitations of claim 10. With respect to claim 16, Devarakonda et al. 
teaches the system of claim 10, wherein the non-transitory computer-readable medium stores further instructions that, when executed by the processor, cause the system to perform further operations comprising, according to the entity-specific classification of the first entity, one or more of: transmit a notification to the first entity, wherein the notification is related to the entity-specific classification (Paragraph 106 discloses an unsupervised classification is performed. For example, the machine-learning application trainer 224 and/or access applications 228 and/or data analyst may utilize the concatenation described in FIG. 7 and clustering algorithms to manifest a clustering of the information. At operation 804, class segmentation labels are generated based on the clustering of the information); reject a transaction involving the first entity; remove the first entity from
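For orientation only, the pipeline recited across the independent claims (tokenize product descriptions into a word vector, encode supply-chain topology as a second vector, concatenate per claim 7, score the combined vector, and decline/block when the entity is classified high risk per the Jameson-cited limitation) can be sketched as follows. This is an illustrative stand-in, not the applicant's or any reference's implementation: the bag-of-words tokenizer, degree-based topology features, and threshold classifier replace the claimed clustering-space and classification machine learning models, and all names and values are hypothetical.

```python
from collections import Counter

def product_vector(descriptions, vocab):
    """Tokenize and aggregate item descriptions into one bag-of-words vector."""
    tokens = [w for d in descriptions for w in d.lower().split()]
    counts = Counter(tokens)
    return [counts.get(w, 0) for w in vocab]

def topology_vector(entity, edges):
    """Encode an entity's supply-chain connections as simple degree features
    (a crude proxy for the claimed topological 'shape')."""
    out_deg = sum(1 for src, _ in edges if src == entity)
    in_deg = sum(1 for _, dst in edges if dst == entity)
    return [out_deg, in_deg, out_deg + in_deg]

def combined_vector(first, second):
    """Claim 7: combining the two data vectors is concatenation."""
    return first + second

def classify(entity_vec, high_risk_threshold):
    """Stand-in for the classification model: a scalar score vs. a threshold."""
    score = sum(entity_vec)
    return "high risk" if score >= high_risk_threshold else "low risk"

def decide(entity_vec, high_risk_threshold):
    """Reject the pending transaction and block future ones when high risk."""
    if classify(entity_vec, high_risk_threshold) == "high risk":
        return {"transaction": "declined", "future_transactions": "blocked"}
    return {"transaction": "approved", "future_transactions": "allowed"}

# Hypothetical merchant with two buyers and one upstream supplier.
vocab = ["replica", "watch", "luxury"]
edges = [("seller_a", "buyer_1"), ("seller_a", "buyer_2"), ("mill_x", "seller_a")]
pv = product_vector(["Replica luxury watch", "luxury watch"], vocab)
tv = topology_vector("seller_a", edges)
cv = combined_vector(pv, tv)
print(decide(cv, high_risk_threshold=8))
```

The sketch's interest for prosecution is that the final two functions map onto the limitations the Examiner reads onto Jameson's paragraph 78: a learned threshold, a decline decision on the pending transaction, and a forward-looking block on the entity.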

Prosecution Timeline

Dec 16, 2022
Application Filed
Dec 27, 2023
Non-Final Rejection — §101, §103
Feb 28, 2024
Interview Requested
Mar 18, 2024
Applicant Interview (Telephonic)
Apr 09, 2024
Response Filed
Jun 03, 2024
Examiner Interview Summary
Jul 15, 2024
Final Rejection — §101, §103
Oct 01, 2024
Interview Requested
Oct 24, 2024
Response after Non-Final Action
Nov 13, 2024
Examiner Interview (Telephonic)
Nov 14, 2024
Response after Non-Final Action
Dec 11, 2024
Request for Continued Examination
Dec 17, 2024
Response after Non-Final Action
Jan 07, 2025
Non-Final Rejection — §101, §103
Mar 13, 2025
Interview Requested
Jul 14, 2025
Response Filed
Oct 21, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12380068
RECENT FILE SYNCHRONIZATION AND AGGREGATION METHODS AND SYSTEMS
2y 5m to grant Granted Aug 05, 2025
Patent 12339822
METHOD AND SYSTEM FOR MIGRATING CONTENT BETWEEN ENTERPRISE CONTENT MANAGEMENT SYSTEMS
2y 5m to grant Granted Jun 24, 2025
Patent 12321704
COMPOSITE EXTRACTION SYSTEMS AND METHODS FOR ARTIFICIAL INTELLIGENCE PLATFORM
2y 5m to grant Granted Jun 03, 2025
Patent 12271379
CROSS-DATABASE JOIN QUERY
2y 5m to grant Granted Apr 08, 2025
Patent 12259876
SYSTEM AND METHOD FOR A HYBRID CONTRACT EXECUTION ENVIRONMENT
2y 5m to grant Granted Mar 25, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
77%
Grant Probability
93%
With Interview (+16.2%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 760 resolved cases by this examiner. Grant probability derived from career allow rate.
