DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The following is a Final Office Action in response to communications received on 12/30/2025. Claims 1-20 are currently pending and have been examined. Claims 1-8 and 15 have been amended.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Step 1: Claims 1-7 are directed to a computer-readable medium, claims 8-14 to a method, and claims 15-20 to a system. Thus, each independent claim, on its face, is directed to one of the statutory categories of 35 U.S.C. §101. However, claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 2A Prong 1: The independent claims (1, 8 and 15, taking claim 1 as a representative claim) recite:
One or more computer storage media storing computer-usable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform operations, the operations comprising:
receiving a signature at various checkpoints in a user workflow, the signature comprising times at which particular activities of a user took place;
storing the signature in a knowledge graph, wherein the knowledge graph comprises activities as vertices and qualifiers for activities as adjacency relations with other activities;
converting, without human intervention, a search query for a combination of signatures within a particular timeframe indicative of fraud into graph traversal logic of the knowledge graph;
traversing, at runtime of the user workflow, the knowledge graph utilizing the graph traversal logic, by testing adjacency relations within the particular timeframe;
and based on the traversing, dynamically modifying the user workflow to prevent fraudulent activity.
These limitations, except for the italicized portions, under their broadest reasonable interpretations, recite certain methods of organizing human activity for managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) as well as commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations). The claimed invention recites steps for preventing fraud in online transactions (see [0002] of the instant specification) through the analysis of user activities being converted into a knowledge graph, searching for fraudulent activities, and modifying the workflow based on the findings. The steps, under their broadest reasonable interpretation, specifically fall under sales activities. The Examiner notes that although the claim limitations are summarized, the analysis regarding subject matter eligibility considers the entirety of the claim and all of the claim elements individually, as a whole, and in ordered combination.
Prong 2: This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of
One or more non-transitory computer storage media storing computer-usable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform operations, the operations comprising: (claim 1)
A computer-implemented method comprising: (claim 8)
A computer system comprising: one or more processors; and one or more computer storage medium storing computer-usable instructions that, when used by the one or more processors, causes the computer system to perform operations comprising: (claim 15)
The additional elements of One or more computer storage media storing computer-usable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform operations, the operations comprising: (claim 1); A computer-implemented method comprising: (claim 8); A computer system comprising: one or more processors; and one or more computer storage medium storing computer-usable instructions that, when used by the one or more processors, causes the computer system to perform operations comprising: (claim 15) are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of processing data) such that they amount to no more than mere instructions to apply the exception using a generic computer component. The limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application – MPEP 2106.05(f).
Accordingly, these additional elements when considered individually or as a whole do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The independent claims are directed to an abstract idea.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed with respect to Step 2A Prong two, the additional elements in the claims amount to no more than mere instructions to apply the judicial exception using a generic computer component.
Even when considered as an ordered combination, the additional elements of claims 1, 8, and 15 do not add anything that is not already present when they are considered individually. Therefore, under Step 2B, there are no meaningful limitations in claims 1, 8, and 15 that transform the judicial exception into a patent eligible application such that the claims amount to significantly more than the judicial exception itself (see MPEP 2106.05).
As such, independent claims 1, 8, and 15 are ineligible.
Dependent claims 2-7, 9-14, and 16-20, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. §101 because the additional recited limitations fail to establish that the claims are not directed to the same abstract idea as independent claims 1, 8, and 15 without significantly more.
Claim 2 recites further comprising, receiving, the signatures of risk indicators from a knowledge graph storing time series data corresponding to fraudulent users. The limitation merely further limits the abstract idea and does not recite significantly more to integrate the judicial exception into a practical application.
Claim 3 recites further comprising, receiving, at a user interface, a search query for signatures of risk indicators, wherein the signatures of risk indicators comprise a combination of signatures. The limitation merely further limits the abstract idea and recites the additional element of a user interface. However, the interface is recited at a high level of generality and does not recite significantly more to integrate the judicial exception into a practical application.
Claim 4 recites further comprising, aggregating a geolocation of a user device corresponding to the user workflow, a device type of the user device, and browser or application information corresponding to the user workflow. The limitation merely further limits the abstract idea and does not recite significantly more to integrate the judicial exception into a practical application.
Claim 5 recites further comprising, enabling, at a user interface, a business user to dynamically specify the checkpoint in the user workflow. The limitation merely further limits the abstract idea and recites the additional element of a user interface. However, the interface is recited at a high level of generality and does not recite significantly more to integrate the judicial exception into a practical application.
Claim 6 recites further comprising, enabling, at a user interface, a business user to dynamically search for the combination of signatures leading up to the checkpoint in the user workflow. The limitation merely further limits the abstract idea and recites the additional element of a user interface. However, the interface is recited at a high level of generality and does not recite significantly more to integrate the judicial exception into a practical application.
Claim 7 recites wherein the checkpoint comprises: user registration, user sign in, change in shipping address, change in email address, change in user device, or checkout. The limitation merely further limits the abstract idea and does not recite significantly more to integrate the judicial exception into a practical application.
Claims 9-14 and 16-20 recite parallel claim language and therefore are also rejected for the reasons set forth above. For these reasons claims 1-20 are rejected under 35 USC 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 7, 8, 9, 14, 15, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bitton (US 20210400064) in view of Thomas (US 20250278644) in further view of Wu (US 20240169214).
Regarding claims 1, 8, and 15, Bitton discloses:
One or more non-transitory computer storage media storing computer-usable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform operations, the operations comprising: (claim 1) (element 100 Figure 1)
A computer-implemented method comprising: (claim 8) (element 100 Figure 1)
A computer system comprising: one or more processors; and one or more computer storage medium storing computer-usable instructions that, when used by the one or more processors, causes the computer system to perform operations comprising: (claim 15) (element 100 Figure 1)
receiving a signature at various checkpoints in a user workflow, the signature comprising times at which particular activities of a user took place; (110, Labeled user flow data of multiple browsing sessions, in Figure 1; [0035] First, labeled user flow data 110 (of FIG. 1) are obtained, for example, by extracting information from one or more log files of an HTTP (HyperText Transfer Protocol) server hosting a particular website, representing that information as user flow data, and labeling each browsing session in the user flow data as legitimate or fraudulent. And see [0040]-[0045] for a time series data example of the user session)
storing the signature in a […] graph; (200 Construct a directed graph for each of the sessions in Figure 2)
Traversing (differentiate between graph feature values), at runtime of the user workflow, the […] graph […]; [0024] Advantageously, the machine learning classifier is trained to differentiate between graph feature values that are typically associated with browsing activity of legitimate users, and graph feature values typically associated with browsing activity of fraudulent users. [0091] In step 406, the classifier may be applied to the features computed in step 402 (and optionally also those computed in step 404), in order to infer a classification of the session in question as legitimate or fraudulent 312 (also of FIG. 3). Block 312 may also serve as a decision block, triggering further action if the session in question has been classified as fraudulent.
and based on the traversing, dynamically modifying the user workflow to prevent fraudulent activity. [0091] For example, in step 408, a fraudulent browsing session may be terminated to prevent additional damage, and/or be reported to responsible personnel who can take actions to mitigate the fraudulent activity either in real time or after the fact.
While Bitton discloses the construction and analysis of a directed graph in order to make determinations about fraudulent activity of user website browsing sessions, the reference does not expressly disclose:
a knowledge graph
wherein the knowledge graph comprises activities as vertices and qualifiers for activities as adjacency relations with other activities;
converting, without human intervention, a search query for a combination of signatures within a particular timeframe indicative of fraud into graph traversal logic of the knowledge graph;
the knowledge graph utilizing the graph traversal logic, by testing adjacency relations within the particular timeframe;
However Thomas teaches:
A knowledge graph [0076] As the TSAgents perform these tasks on an iterative basis over time, these derived relationships (including DDRs) are reflected in one or more iteratively updated Knowledge Graphs. As a result, the Knowledge Graphs represent, at any given point in time, the underlying relationships from which a current narrative or story behind the transactions, documents and other related information previously processed by the system can be generated. And see [0117]
converting, without human intervention, a search query for a combination of signatures, ([0280] The AP clerk in this scenario simply submits the invoice to system 200 along with a natural-language request to process that invoice. As system 200 proceeds to capture and classify the invoice (displaying an animation illustrating that process), it provides almost immediate results, including the amount of the invoice and vendor name, as well as key summary information. And see [0281]) within a particular timeframe, [0076] As the TSAgents perform these tasks on an iterative basis over time, these derived relationships (including DDRs) are reflected in one or more iteratively updated Knowledge Graphs. As a result, the Knowledge Graphs represent, at any given point in time, the underlying relationships from which a current narrative or story behind the transactions, documents and other related information previously processed by the system can be generated. And see [0117] indicative of fraud into graph traversal logic of the knowledge graph;[0081] In one embodiment, a “Narrative Generator” is employed by the Scenario Controller to traverse the Knowledge Graphs to facilitate the determination of the “next step” in the workflow specific to a customer's scenario (e.g., taking an action relating to an upcoming decision, processing additional documents, re-processing documents to complete a particular task based on updated information in the Knowledge Graphs, traversing the Knowledge Graphs to respond to internally or externally generated natural language queries, etc.). [0145] At various points in time, Scenario Controller 102 employs Narrative Generator 150 to traverse the current state of Knowledge Graph 125 to perform particular tasks. In one embodiment, at the end of each iteration, Narrative Generator 150 traverses Knowledge Graph 125 to generate a summary or “narrative story” of the state of the relevant transactions at that point in time. 
This narrative is iteratively regenerated over time as more and more information is processed by system 100.
the knowledge graph utilizing the graph traversal logic; [0183] As noted above, in one embodiment, Interrogation Engine 290 employs trained LLM models to interpret user queries, with the additional capability of traversing the current state of Knowledge Graph 225 and interpreting them in the context of various relationships, including DDRs. Moreover, it also utilizes such context to determine how best to address user queries. [0158] Similarly, users submit natural language queries to system 100 via Interrogation Engine 190. LLMs facilitate the interpretation of such queries and the formulation of responses by Interrogation Engine 190. Moreover, standard LLMs are enhanced to enable Interrogation Engine 190 to traverse Knowledge Graph 125 and utilize DDRs and other derived relationships to analyze queries and facilitate intelligent responses. And see [0202,0203, 0302]
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the analysis of the constructed graph for fraudulent behavior of Bitton to include a knowledge graph; converting, without human intervention, a search query for a combination of signatures indicative of fraud into graph traversal logic of the knowledge graph; and the knowledge graph utilizing the graph traversal logic, as taught in Thomas, in order to automate processes that are currently relegated to human judgment and intervention due to the difficulty of deriving meaning from context (paragraph 0006).
While Bitton in view of Thomas teaches the analysis of the constructed graph for fraudulent behavior and the construction of a knowledge graph using time series data over an iterative time set, the combination does not expressly disclose:
wherein the knowledge graph comprises activities as vertices and qualifiers for activities as adjacency relations with other activities;
traversal logic, by testing adjacency relations within the particular timeframe;
However Wu teaches:
wherein the knowledge graph comprises activities as vertices and qualifiers for activities as adjacency relations with other activities; [0083] Specifically, when each technical time node is constructed, a unique entity id and entity status table may be generated; the entity status stores the event information of each technical event entity at different times; [0092] In an embodiment of the present disclosure, the server may regard the historical event information of the technical-event node before the current time as a model inputting, so that the target technical-timing prediction model outputs the event information of the technical-event node at the next time after the current time. Then, the event information of the technical-event node of the technical knowledge graph is updated according to the event information obtained by prediction.
traversal logic, by testing adjacency relations within the particular timeframe; [0101] Step 201, determining an adjacent-event node in the technical knowledge graph adjacent to the technical-event node; [0102] In an embodiment of the present disclosure, the adjacent-event node refers to an event node with an adjacent relationship with the technical-event node in the technical knowledge graph. It may be understood that there is a certain relationship between the event information of the technical-event node and the adjacent-event node connected to it when the technical information changes, therefore, the feature vector of the technical-event node may be represented by introducing the historical event information of the adjacent-event node, so that the model prediction may consider the influence of the event updating to the technical-event node by the adjacent-event node.[0103] Step 202, converging the technical-event node and the historical event information of the adjacent-event node, to obtain a converging feature vector; and [0104] In an embodiment of the present disclosure, due to a certain redundant content may exist in the event information between the adjacent-event node and the technical-event node, therefore, converging the historical event information of the technical-event node and the adjacent-event node by a converging processing way, so that when the obtained converging feature vector is simplified as much as possible, it may still give consideration to the historical event information of the technical-event node and the adjacent-event node.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the analysis of the constructed graph for fraudulent behavior of Bitton in view of the construction of the knowledge graph in Thomas to include wherein the knowledge graph comprises activities as vertices and qualifiers for activities as adjacency relations with other activities, and traversal logic by testing adjacency relations within the particular timeframe, as taught in Wu, in order to achieve complete and flexible modeling of the technical object and effectively integrate multi-source, heterogeneous technical information.
Regarding claims 2, 9, and 16, Bitton in view of Thomas in further view of Wu teaches the limitations set forth above. Bitton further discloses:
further comprising, receiving, the signatures of risk indicators from a […] graph storing time series data corresponding to fraudulent users. [0049] In a pre-processing stage, those of the transitions whose referrer pages are identical and whose target pages are identical, may be merged. Interim reference is made to FIG. 5, which shows such exemplary merging. On the left are illustrated: First, a transition from page A to page B with a TOP of 5 seconds (namely, time on page B), represented by two vertices A and B and a directed edge from A to B with an attribute TOP=5. Second, another transition from page A to page B with a TOP of 6 seconds, represented by two vertices A and B and a directed edge from A to B with an attribute TOP=6. These two transitions are merged into two vertices A and B, and a directed edge from A to B with the attributes TOP sum=11 and TOP count=2. [0091] In step 406, the classifier may be applied to the features computed in step 402 (and optionally also those computed in step 404), in order to infer a classification of the session in question as legitimate or fraudulent 312 (also of FIG. 3). Block 312 may also serve as a decision block, triggering further action if the session in question has been classified as fraudulent. For example, in step 408, a fraudulent browsing session may be terminated to prevent additional damage, and/or be reported to responsible personnel who can take actions to mitigate the fraudulent activity either in real time or after the fact.
While Bitton discloses the construction and analysis of a directed graph in order to make determinations about fraudulent activity of user website browsing sessions, the reference does not expressly disclose:
a knowledge graph
However Thomas teaches:
a knowledge graph [0081] In one embodiment, a “Narrative Generator” is employed by the Scenario Controller to traverse the Knowledge Graphs to facilitate the determination of the “next step” in the workflow specific to a customer's scenario (e.g., taking an action relating to an upcoming decision, processing additional documents, re-processing documents to complete a particular task based on updated information in the Knowledge Graphs, traversing the Knowledge Graphs to respond to internally or externally generated natural language queries, etc.).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the analysis of the constructed graph for fraudulent behavior of Bitton to include a knowledge graph, as taught in Thomas, in order to automate processes that are currently relegated to human judgment and intervention due to the difficulty of deriving meaning from context (paragraph 0006).
Regarding claims 7, 14 and 20, Bitton in view of Thomas in further view of Wu teaches the limitations set forth above. Bitton further discloses:
wherein the checkpoint comprises: user registration, user sign in, change in shipping address, change in email address, change in user device, or checkout. [0064] In addition, one or more of the following features of each browsing session may be computed: [0065] Time of day when the logon that initiated the session occurred. [0066] Day of the week when the logon that initiated the session occurred. [0067] Statistics as to different categories (e.g., transactions view, account information, credit cards, wire transfers, savings, securities trading, etc.) to which the pages belong: number of visits to pages of each category, total/mean/median/minimal/maximal TOP for each category, standard deviation of TOP in each category, and/or any other computable statistical measure of the categories.
Claims 3, 5, 6, 10, 12, 13, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Bitton (US 20210400064) in view of Thomas (US 20250278644) in view of Wu (US 20240169214) in further view of Krishnamoorthy (US 20200175173).
Regarding claims 3, 10, and 17, Bitton in view of Thomas in further view of Wu teaches the limitations set forth above. While Bitton discloses the construction and analysis of a directed graph in order to make determinations about fraudulent activity of user website browsing sessions, and Thomas teaches the use of user-provided natural language queries to the system regarding the particulars of a scenario (to include fraud), which are then translated to traverse a Knowledge Graph (see [0090], [0203], and [0302]), the combination does not expressly disclose:
further comprising, receiving, at a user interface, a search query for signatures of risk indicators, wherein the signatures of risk indicators comprise a combination of signatures.
However Krishnamoorthy teaches:
further comprising, receiving, at a user interface, a search query for signatures of risk indicators, wherein the signatures of risk indicators comprise a combination of signatures. [0040] Next, the system defines a threat context indicator comprising a metric for identifying a vulnerability based on the unified cybersecurity ontology, as indicated by block 225. In this way, the system is configurable by the user, who may set a threshold metric or definition for what constitutes a vulnerability. Finally, as illustrated by block 230, the system presents the user with a management interface wherein the user can query the security vulnerability analysis and management platform to determine identified vulnerabilities.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the analysis of the constructed graph for fraudulent behavior of Bitton in view of Thomas in further view of Wu to include further comprising, receiving, at a user interface, a search query for signatures of risk indicators, wherein the signatures of risk indicators comprise a combination of signatures, as taught in Krishnamoorthy, in order to identify issues or defects that present a security concern (paragraph 0045).
Regarding claims 5 and 12, Bitton in view of Thomas in further view of Wu teaches the limitations set forth above. While Bitton discloses the construction and analysis of a directed graph in order to make determinations about fraudulent activity of user website browsing sessions, and Thomas teaches the use of user-provided natural language queries to the system regarding the particulars of a scenario (to include fraud), which are then translated to traverse a Knowledge Graph (see [0090], [0203], and [0302]), the combination does not expressly disclose:
further comprising, enabling, at a user interface, a business user to dynamically specify the checkpoint in the user workflow.
However Krishnamoorthy teaches:
further comprising, enabling, at a user interface, a business user to dynamically specify the checkpoint in the user workflow. [0048] On the right side of FIG. 4, query system 410 and alert management system 412 are shown. Query system 410 may be used to query data from the UCO 404, library dependency tree 405, installed software dependency tree 406, bug and issues tree 407, threat intelligence context indicator 408 based on a fully customizable set of rules corresponding to certain information that a user may be interested in. For instance, the user may be interested in identified vulnerabilities with regard to a specific set of installed software libraries and how these installed software libraries interact with particular open source systems. The query system presents this tailored information to the user as a security knowledge graph as depicted in block 411. In some embodiments, the system may alert users preemptively about detected vulnerabilities or problems that have been identified. This capability is depicted as alert management system 412, and is also fully customizable such that the user may set alert parameters based on software systems, programs, relationships, severity of possible data leakage, and the like. The alert management system 412 encompasses a set of linkage processes 413 that determine where alerts should be sent based on a semantic web rule language (SWRL) that determines rules in terms of classes, properties, individuals and the like.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the analysis of the constructed graph for fraudulent behavior of Bitton in view of Thomas in further view of Wu to include further comprising, enabling, at a user interface, a business user to dynamically specify the checkpoint in the user workflow, as taught in Krishnamoorthy, in order to identify issues or defects that present a security concern (paragraph 0045).
Regarding claims 6 and 13, Bitton in view of Thomas in further view of Wu teaches the limitations set forth above. While Bitton discloses the construction and analysis of a directed graph in order to make determinations about fraudulent activity of user website browsing sessions, and Thomas teaches the use of user-provided natural language queries to the system regarding the particulars of a scenario (to include fraud), which are then translated to traverse a Knowledge Graph (see [0090], [0203], and [0302]), the combination does not expressly disclose:
further comprising, enabling, at a user interface, a business user to dynamically search for the combination of signatures leading up to the checkpoint in the user workflow.
However Krishnamoorthy teaches:
further comprising, enabling, at a user interface, a business user to dynamically search for the combination of signatures leading up to the checkpoint in the user workflow. [0040] Next, the system defines a threat context indicator comprising a metric for identifying a vulnerability based on the unified cybersecurity ontology, as indicated by block 225. In this way, the system is configurable by the user, who may set a threshold metric or definition for what constitutes a vulnerability. Finally, as illustrated by block 230, the system presents the user with a management interface wherein the user can query the security vulnerability analysis and management platform to determine identified vulnerabilities.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the analysis of the constructed graph for fraudulent behavior of Bitton in view of Thomas in further view of Wu to include further comprising, enabling, at a user interface, a business user to dynamically search for the combination of signatures leading up to the checkpoint in the user workflow, as taught in Krishnamoorthy, in order to identify issues or defects that present a security concern (paragraph 0045).
Regarding claim 19, the claim recites parallel language to that of claims 5 and 6 and therefore is also rejected under the combination of Bitton in view of Thomas in view of Wu in further view of Krishnamoorthy as set forth above.
Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Bitton (US 20210400064) in view of Thomas (US 20250278644) in view of Wu (US 20240169214) in further view of Hu (US 20110112931).
Regarding claims 4, 11, and 18, Bitton in view of Thomas in further view of Wu teaches the limitations set forth above. While Bitton discloses the construction and analysis of a directed graph in order to make determinations about fraudulent activity of user website browsing sessions, and Thomas teaches the use of user-provided natural language queries to the system regarding the particulars of a scenario (to include fraud) which are then translated to traverse a Knowledge Graph (see [0090], [0203] and [0302]), the combination does not expressly disclose:
further comprising, aggregating a geolocation of a user device corresponding to the user workflow, a device type of the user device, and browser or application information corresponding to the user workflow.
However Hu teaches:
further comprising, aggregating a geolocation of a user device corresponding to the user workflow, [0107] In one implementation to detect fraud, the system verifies IP Geographic Location. The computer's IP address contains its geographic location from country, state/provide, down to city, when an IP address is detected by the system, the system look up the geographic location, and compares the geographic location with the address claimed by the customer, and both parameters can be processed by the index engine. Rules can be set accordingly, for example, if the information does not match, the system will reject the payment. a device type of the user device [0067] of hardware information such as the ID of computer hard disk drive, ID of microprocessor, ID of network card, ID of motherboard etc; and see [0061, 0063], and browser or application information corresponding to the user workflow (see for example application "Paypal" in Figure 11).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the analysis of the constructed graph for fraudulent behavior of Bitton in view of Thomas in further view of Wu to include further comprising, aggregating a geolocation of a user device corresponding to the user workflow, a device type of the user device, and browser or application information corresponding to the user workflow, as taught in Hu, in order to help merchants identify frauds, provide additional protection to the authentication system of the credit card issuers or payment processors, and minimize losses should the credit card issuers or payment processors fail to detect and prevent fraud (paragraph 0018).
Related Art Not Cited
Motaharian (US 2020/0320619) discloses generating a graph that represents a community of shared tradelines based on matches between attributes associated with tradelines, such as account numbers or account type. A set of machine learning models can be trained using a training dataset to provide a set of rules that is optimized for evaluating the graph to detect synthetic identities.
Steiman (US 11693958) discloses identifying anomalies using knowledge graphs.
Response to Arguments
The examiner has withdrawn the rejection under 35 USC 101 with regard to a signal per se in view of the claim amendments and corresponding support for the amendment in [0049] of the instant specification.
Applicant's arguments filed 12/30/2025 have been fully considered but they are not persuasive.
With respect to the remarks directed to 35 USC 101, the examiner first asserts the rejection has been updated above in view of the claim amendments. The claim does still recite an abstract idea as identified above. As to the integration of the judicial exception into a practical application, the examiner asserts the claims still do not recite significantly more to integrate the judicial exception into a practical application. Applicant points to [0014] and [0016] of the instant specification, asserting the techniques mark an improvement to fraud detection through the organization of the user activity as a knowledge graph stored in memory. The examiner asserts that the alleged improvement is at most an improvement to the abstract idea itself and not a technical solution to a technical problem. The ability to concisely fetch results based on the organization is merely consequential to the chosen organizational structure of the information and does not improve the computer or memory itself.
Further, per MPEP 2106.05(f), the use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, "claiming the improved speed or efficiency inherent with applying the abstract idea on a computer" does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015).
The dependent claims remain rejected as well, as they merely further limit the abstract idea and do not integrate the judicial exception into a practical application.
With respect to the remarks directed to 35 USC 103, the rejection has been updated above and now also relies on Wu for the amended claim language. The examiner does also note as shown in the updated rejection above, that Bitton and Thomas do teach time series data (see [0040-0045] of Bitton and [0076] of Thomas), but Wu is relied upon for the teachings of the adjacency relations. For at least these reasons the claims remain rejected under 35 USC 103.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTORIA E. FRUNZI whose telephone number is (571) 270-1031. The examiner can normally be reached Monday-Friday, 7-4 (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein can be reached at (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
VICTORIA E. FRUNZI
Primary Examiner
Art Unit TC 3689
/VICTORIA E. FRUNZI/Primary Examiner, Art Unit 3689 1/29/2026