DETAILED ACTION
This Office action reopens prosecution in response to the Patent Trial and Appeal Board decision of 10/29/2025, based on the claims filed 12/04/2024.
Claims 1-20 are presented for examination.
Reopening of Prosecution
37 C.F.R. 1.198 Reopening after a final decision of the Patent Trial and Appeal Board:
When a decision by the Patent Trial and Appeal Board on appeal has become final for judicial review, prosecution of the proceeding before the primary examiner will not be reopened or reconsidered by the primary examiner except under the provisions of § 1.114 or § 41.50 of this title without the written authority of the Director, and then only for the consideration of matters not already adjudicated, sufficient cause being shown.
By signing below the Director authorizes reopening prosecution for consideration of the following matters not already adjudicated by the Board.
Prosecution of the instant application is hereby reopened in accordance with MPEP 1214.07, for matters that have not already been adjudicated, sufficient cause having been shown below.
This action is made Non-Final.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5-9, 11-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over PASCARELLA et al. (Pub. No. US 2018/0373780; hereinafter, PASCARELLA) in view of Fang et al. (Pub. No. US 2020/0065814; hereinafter, Fang), and further in view of Chaine et al. (Pat. No. US 8,626,697; hereinafter, Chaine).
Regarding claim 1, PASCARELLA teaches a computer system comprising: one or more processing devices and at least one memory device operably coupled to the one or more processing devices, the one or more processing devices are configured to (PASCARELLA, [Par.0005], “According to one embodiment, the invention relates to a system that implements a database abstraction and data linkage engine. The system comprises: a central data repository that stores and maintains customer data; an interactive user interface that receives an input; and a computer processor, coupled to the memory component and the interactive interface, configured to perform the steps comprising: receiving, via the input, one or more attributes to form a basis for a network of connections having a predetermined number order representative of network size;”):
collect, recursively, from one or more external sources, additional attributes for a target entity identified using known attributes of the target entity (PASCARELLA, [Fig.2, Par.0026-0030], “[0026], Step 210 represents an input of one or more attributes to build connections around. For example, the attributes may include IP addresses, email addresses, physical addresses, names, devices, phone numbers, accounts, internal identifier, etc. The attributes may be identified by a separate application (e.g., fraud application, fraud system, etc.) and provided electronically as an input. The attributes may be associated with a known bad activity. For example, the input may represent an account number having fraudulent charges, a name associated with a known fraudster, a phone number from where a fraudulent purchase or activity was made, etc. The input may also represent potentially suspicious activity or other event that meets a predetermined risk threshold. For example, a system may identify a potentially suspicious activity where one or more related attributes may be used to determine a network of connections. The potentially suspicious activity may be confirmed based on the network connections to other known or potentially fraudulent events, players, activities, etc. According to another example, a network may be created for research and analysis. For example, a new customer identifier may be researched to confirm good standing. As shown in FIG. 2, the system may receive one or more attributes as well as a group or category of attributes. The initial input may be any event, data, identifier, dataset, etc.”…[0027], “At step 212, queries may be executed on a repository to extract activity relating to or involving the one or more reference attributes. Such activity may include online activity, demographic information, and account information associated with attribute.
The repository may represent a central data repository as well as a plurality of repositories in a single location or across multiple locations. For example, the central data repository may represent internal sources (e.g., lines of business, etc.), external intelligent sources, and a combination thereof. External sources may also include credit score companies, merchants, service providers, government entities, third party investigations, media sources, etc.” Examiner’s note: the system receives data that includes the known attributes (IP addresses, email addresses, physical addresses, names, devices, phone numbers, accounts) associated with the target entity (a known fraudster), and at step 212 the system collects/extracts additional attributes (demographic information and account information) associated with the known attributes of the target entity (the known fraudster). The collection of associated attribute data proceeds through database search loops or iterations; for example, a determination may be made as to whether the system has reached a defined number of database search loops or iterations, as seen at [Par.0030]. Therefore, the iterative collection is considered recursive collection.).
inject the known attributes and the additional attributes into one or more models including at least one of one or more machine learning models (PASCARELLA, [Par.0017], “For example, the innovative data abstraction engine may be linked to known bad actor data and then perform automated queries on this data to proactively alert potentially fraudulent activity. The data abstraction engine may also add other attributes and apply machine learning to the associations to more intelligently describe the returned network.” and [Par.0025], “FIG. 2 is an exemplary detailed flow diagram that illustrates database abstraction and data linkage, according to an embodiment of the present invention. Step 210 represents an input of one or more attributes to build connections around. At step 212, a query may be executed on a repository to extract activity relating to or involving the one or more reference attributes. At step 214, the system may retrieve customer data and associated attributes. At step 216, a determination may be made as to whether the system reached a defined number of database search loops or interactions. At step 218, the system may cleanse the data. At step 220, the system may create attribute datasets. If a defined number of database search loops have been reached, the system may then combine data from database queries, at step 230. Data analytics may be performed at step 232. At step 234, data may then be prepared for consumption by other software, analysts, receiving systems, applications, etc. At step 236, the system may generate an output via an interactive user interface. An embodiment of the present invention may be directed to implementing a machine learning engine, as represented by 250.” Examiner’s note: the machine learning model is applied to the data, including the additional attributes and the attributes of the known bad actor.):
However, PASCARELLA does not teach one or more machine learning models trained using data associated with a plurality of known legitimate business entities and a plurality of illegitimate business entities, wherein the data comprises financial transactions, legal addresses, and legal entity names for the plurality of known legitimate business entities and the plurality of illegitimate business entities, wherein collection of the known attributes and the collection of additional attributes are executed in parallel.
On the other hand, Fang teaches one or more machine learning models trained using data associated with a plurality of known legitimate business entities and a plurality of illegitimate business entities (Fang, [Par.0075], “In some embodiments, the account classification module 132 may determine the threshold values based on empirical data. For example, the account classification module 132 may use historical account data associated with known fraudulent user account and non-fraudulent account to determine the threshold values. In some embodiments, the risk level determination module 206 may include, or utilize, a machine learning model to determine the risk level for the user account 530. The machine learning module may be implemented as an artificial neural network. The risk level determination module 206 may configure the machine learning model to take the one or more of the derived values as input values in the model, and configure the machine learning model to produce an output value corresponding to the risk level of the user account 530. The risk level determination module 206 may also train the machine learning model using the historic account data associated with known fraudulent user account and non-fraudulent account such that the machine learning model may be trained by continuously adjusting the various threshold values corresponding to the derived values (the input values to the machine learning model) to produce the output value.”),
wherein the data comprises financial transactions, legal addresses, and legal entity names for the plurality of known legitimate business entities and the plurality of illegitimate business entities (Fang, [Par.0035], “The user device 110, in one embodiment, includes a user interface (UI) application 112 (e.g., a web browser), which may be utilized by the user 140 to conduct electronic transactions (e.g., selling, shopping, purchasing, bidding, etc.) with the service provider server 130 over the network 160.”, [Par.0039], “The identifier 114 may include one or more attributes related to the user 140 of the user device 110, such as personal information related to the user (e.g., one or more user names, passwords, photograph images, biometric IDs, addresses, phone numbers, social security number, etc.) and banking information and/or funding sources (e.g., one or more banking institutions, credit card issuers, user account numbers, security data and information, etc.).” and [Par.0075], “In some embodiments, the account classification module 132 may determine the threshold values based on empirical data. For example, the account classification module 132 may use historical account data associated with known fraudulent user account and non-fraudulent account to determine the threshold values.” Examiner’s note: the data comprises financial transactions such as selling, purchasing, and bidding. The data is associated with known fraudulent and non-fraudulent user accounts, wherein each user account is associated with information about the name (legal name) and address (legal address) of the account holder.).
PASCARELLA and Fang are analogous art because they share the same field of endeavor of identifying fraudulent transaction data.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the collecting, recursively, from one or more external sources, of additional attributes for a target entity identified using known attributes of the target entity, and the injecting of the known attributes and the additional attributes into one or more models including at least one of one or more machine learning models, as taught by PASCARELLA, to include one or more machine learning models trained using data associated with a plurality of known legitimate business entities and a plurality of illegitimate business entities, wherein the data comprises financial transactions, legal addresses, and legal entity names for the plurality of known legitimate business entities and the plurality of illegitimate business entities, as taught by Fang. The modification would have been obvious because one of ordinary skill in the art would have been motivated to detect fraud before the fraudulent activity occurs (Fang, [Par.0015], “This way, a new user account created by the malicious user who is associated with one or more known fraudulent accounts may be automatically detected even before the new user account is ever used to perform fraudulent activities.”).
However, neither Fang nor PASCARELLA teaches wherein collection of the known attributes and the collection of additional attributes are executed in parallel.
On the other hand, Chaine teaches wherein collection of the known attributes and the collection of additional attributes are executed in parallel (Chaine, [Col.1], “In another aspect, first contextual data is received that characterizes behavioral attributes of a user visiting at least one web page. The first contextual data is collected by anonymously tracking interaction of the user with the at least one web page via a data collector embedded in the at least one web page. Thereafter, a series of a web services are initiated (in sequence or in parallel) to obtain additional information until a dominant attribute is identified. The additional information pertains to the user based on anonymously collected data other than the first contextual data. The dominant attribute is identified by determining which attributes among a plurality of pre-defined attributes are present for the user based on the first contextual data and the additional information and determining whether any of such attributes is a dominant attribute... first contextual data characterizing behavioral attributes of a user visiting at least one web page of a website is received. The first contextual data is collected by anonymously tracking interaction of the user with the at least one web page via a data collector embedded in the at least one web page. Second contextual data is also received that characterizes non-behavioral attributes of the user.”).
PASCARELLA, Fang, and Chaine are analogous art because they share the same field of endeavor of identifying data related to a user.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the collecting, recursively, from one or more external sources, of additional attributes for a target entity identified using known attributes of the target entity, as taught by PASCARELLA, to include wherein collection of the known attributes and the collection of additional attributes are executed in parallel, as taught by Chaine. The modification would have been obvious because one of ordinary skill in the art would have been motivated to identify the dominant attribute (Chaine, [Col.1], “In another aspect, first contextual data is received that characterizes behavioral attributes of a user visiting at least one web page. The first contextual data is collected by anonymously tracking interaction of the user with the at least one web page via a data collector embedded in the at least one web page. Thereafter, a series of a web services are initiated (in sequence or in parallel) to obtain additional information until a dominant attribute is identified. The additional information pertains to the user based on anonymously collected data other than the first contextual data. The dominant attribute is identified by determining which attributes among a plurality of pre-defined attributes are present for the user based on the first contextual data and the additional information and determining whether any of such attributes is a dominant attribute... first contextual data characterizing behavioral attributes of a user visiting at least one web page of a website is received. The first contextual data is collected by anonymously tracking interaction of the user with the at least one web page via a data collector embedded in the at least one web page. Second contextual data is also received that characterizes non-behavioral attributes of the user.”).
Regarding claim 2, PASCARELLA teaches the system of claim 1, wherein the one or more processing devices are further configured to enrich the known attributes with the additional attributes, thereby generating enriched target entity data (Pascarella, [Par.0040-0041], “According to an embodiment of the present invention, an input, as shown by 312, may include an attribute to build a connection around. For example, the input may include an IP address that is known (or suspected) to be associated with a fraudulent activity or a potential bad act. The tool may receive the IP address and then automatically identify various connections based on the IP address. For example, the tool may link to the IP address and gather different associated customers and their attributes that are associated with the IP address. The system may also identify sources of data, e.g., internal sources, external sources, third party sources, etc. The system may also provide details concerning datasets, at 316. Output 318 may illustrate a resulting network having a predetermined number order. The system may provide a training feature, through Train 320. This provides additional learning of networks and known bad events to further refine the accuracy of the system. Train 320 may also provide the ability to generate models for fraud prediction. [0041] FIG. 4 illustrates an exemplary illustration of a network, according to an embodiment of the present invention. The graphic shown in FIG. 4 is just one exemplary illustration that is simplified. Other formats and depictions of networks may be provided. In FIG. 4, Attribute 410 may represent an input attribute. For each iteration, an order of network may be generated. As shown, a first order network is shown by the nodes labeled “1.” Each node may represent an attribute, dataset and/or other data. With each iteration, additional associations may be identified. The example of FIG. 4 shows a 7.sup.th order network.”)
Regarding claim 3, PASCARELLA teaches the system of claim 2, wherein the one or more processing devices are further configured to: use one or more recursive analysis techniques on one or more of the known attributes and the additional attributes ( PASCARELLA , [Par.0025], “At step 214, the system may retrieve customer data and associated attributes. At step 216, a determination may be made as to whether the system reached a defined number of database search loops or interactions. At step 218, the system may cleanse the data. At step 220, the system may create attribute datasets. If a defined number of database search loops have been reached, the system may then combine data from database queries, at step 230. Data analytics may be performed at step 232. At step 234, data may then be prepared for consumption by other software, analysts, receiving systems, applications, etc.” and [Par.0028], “At step 214, the system may retrieve customer data and associated attributes. For example, the input attribute may be associated with a customer identifier. The customer identifier may then be used to generate additional attributes. For example, a customer identifier may be associated with household members. The customer identifier may also identify former and past identifiers, accounts and even closed or dormant accounts…[Par.0030], “At step 216, a determination may be made as to whether the system reached a defined number of database search loops or iterations.” , [Par.0033], “Other relevant information from various sources, including external and third party sources, may be identified and combined at step 230.” And [Par.0040], “ According to an embodiment of the present invention, an input, as shown by 312, may include an attribute to build a connection around. For example, the input may include an IP address that is known (or suspected) to be associated with a fraudulent activity or a potential bad act. 
The tool may receive the IP address and then automatically identify various connections based on the IP address. For example, the tool may link to the IP address and gather different associated customers and their attributes that are associated with the IP address. The system may also identify sources of data, e.g., internal sources, external sources, third party sources, etc.”).
Regarding claim 5, PASCARELLA teaches the system of claim 1, wherein the one or more processing devices are further configured to perform the recited functions, but it does not teach discover at least one legal name and at least one address to identify the target entity. On the other hand, Fang teaches discover at least one legal name and at least one address to identify the target entity (Fang, [Par.0016-0017], “Once the known fraudulent accounts are identified, various attributes of the known fraudulent accounts may be obtained and stored, such as in a database. Example attribute types that are obtained for a known fraudulent account may include at least one of a device identifier (e.g., a media access control (MAC) address, a serial number of a device, etc.) of a device used to access the known fraudulent account, a browser type used to access the known fraudulent account, an Internet Protocol (IP) address associated with the device used to access the known fraudulent account, a physical address, a phone number, an identifier of a funding source (e.g., a hash value representing a bank account number, a hash value representing a credit card account number, etc.), a name, an e-mail address, an item description of an item posted for sale through the known fraudulent account, an account number of an account to an affiliated service provider (e.g., an online marketplace website, etc.), a transaction history, and/or other information of the known fraudulent account…[0017] When user accounts (e.g., new seller accounts) are created through the service provider, the service provider may evaluate each particular user account by comparing the attributes of the particular user account to the attributes of the known fraudulent accounts to determine a risk level for the particular user account. The risk level may indicate a likelihood that the particular user account corresponds to a fraudulent account.” Examiner’s note: the system identifies/discovers attributes (the name, physical address, or other information) of the known fraudulent account to determine whether a new user account is fraudulent by comparing the known fraudulent account’s attributes with the particular user account’s attributes.).
PASCARELLA and Fang are analogous art because they share the same field of endeavor of identifying fraudulent transaction data.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of claim 1, as taught by PASCARELLA, to include discover at least one legal name and at least one address to identify the target entity, as taught by Fang. The modification would have been obvious because one of ordinary skill in the art would have been motivated to detect fraud before the fraudulent activity occurs (Fang, [Par.0015], “This way, a new user account created by the malicious user who is associated with one or more known fraudulent accounts may be automatically detected even before the new user account is ever used to perform fraudulent activities.”).
Regarding claim 6, PASCARELLA teaches the system of claim 3, wherein the one or more processing devices are further configured to: generate, subject to the one or more recursive analyses, additional information with respect to the target entity( PASCARELLA , [Par.0025], “At step 214, the system may retrieve customer data and associated attributes. At step 216, a determination may be made as to whether the system reached a defined number of database search loops or interactions. At step 218, the system may cleanse the data. At step 220, the system may create attribute datasets. If a defined number of database search loops have been reached, the system may then combine data from database queries, at step 230. Data analytics may be performed at step 232. At step 234, data may then be prepared for consumption by other software, analysts, receiving systems, applications, etc.” and [Par.0028], “At step 214, the system may retrieve customer data and associated attributes. For example, the input attribute may be associated with a customer identifier. The customer identifier may then be used to generate additional attributes. For example, a customer identifier may be associated with household members. The customer identifier may also identify former and past identifiers, accounts and even closed or dormant accounts…[Par.0030], “At step 216, a determination may be made as to whether the system reached a defined number of database search loops or iterations.” , [Par.0033], “Other relevant information from various sources, including external and third party sources, may be identified and combined at step 230.” And [Par.0040], “ According to an embodiment of the present invention, an input, as shown by 312, may include an attribute to build a connection around. For example, the input may include an IP address that is known (or suspected) to be associated with a fraudulent activity or a potential bad act. 
The tool may receive the IP address and then automatically identify various connections based on the IP address. For example, the tool may link to the IP address and gather different associated customers and their attributes that are associated with the IP address. The system may also identify sources of data, e.g., internal sources, external sources, third party sources, etc.” Examiner’s note: the system collects the additional attributes (the attributes related to the known attribute (IP address) associated with the known fraudulent account) for the target entity (the known fraudulent account) based on the known attributes of the target entity. The collection proceeds through database search loops or iterations; for example, a determination may be made as to whether the system has reached a defined number of database search loops or iterations. Therefore, the iterative collection is considered recursive collection.).
Regarding claim 7, PASCARELLA teaches the system of claim 1, wherein the one or more processing devices are further configured to: train the one or more models comprising (PASCARELLA, [Par.0025], “Data analytics may be performed at step 232. At step 234, data may then be prepared for consumption by other software, analysts, receiving systems, applications, etc. At step 236, the system may generate an output via an interactive user interface. An embodiment of the present invention may be directed to implementing a machine learning engine, as represented by 250. The order illustrated in FIG. 2 is merely exemplary. While the process of FIG. 2 illustrates certain steps performed in a particular order, it should be understood that the embodiments of the present invention may be practiced by adding one or more steps to the processes, omitting steps within the processes and/or altering the order in which one or more steps are performed.”):
identify a plurality of known business entities (PASCARELLA, [Par.0040], “According to an embodiment of the present invention, an input, as shown by 312, may include an attribute to build a connection around. For example, the input may include an IP address that is known (or suspected) to be associated with a fraudulent activity or a potential bad act. The tool may receive the IP address and then automatically identify various connections based on the IP address. For example, the tool may link to the IP address and gather different associated customers and their attributes that are associated with the IP address. The system may also identify sources of data, e.g., internal sources, external sources, third party sources, etc.” Examiner’s note: the system collects the additional attributes (the attributes related to the known attribute (IP address) associated with the known fraudulent account) for the target entity (the known fraudulent account) based on the known attributes of the target entity.);
collect known attributes of the plurality of business entities (PASCARELLA, [Par.0040], “According to an embodiment of the present invention, an input, as shown by 312, may include an attribute to build a connection around. For example, the input may include an IP address that is known (or suspected) to be associated with a fraudulent activity or a potential bad act. The tool may receive the IP address and then automatically identify various connections based on the IP address. For example, the tool may link to the IP address and gather different associated customers and their attributes that are associated with the IP address. The system may also identify sources of data, e.g., internal sources, external sources, third party sources, etc.” Examiner’s note: the system collects the additional attributes (the attributes related to the known attribute (IP address) associated with the known fraudulent account) for the target entity (the known fraudulent account) based on the known attributes of the target entity.);
query the one or more external sources for additional attributes of the known business entities (PASCARELLA, [Par.0040, 0032-0033], “According to an embodiment of the present invention, an input, as shown by 312, may include an attribute to build a connection around. For example, the input may include an IP address that is known (or suspected) to be associated with a fraudulent activity or a potential bad act. The tool may receive the IP address and then automatically identify various connections based on the IP address. For example, the tool may link to the IP address and gather different associated customers and their attributes that are associated with the IP address. The system may also identify sources of data, e.g., internal sources, external sources, third party sources, etc.” Examiner’s note: the system collects the additional attributes (the attributes related to the known attribute (IP address) associated with the known fraudulent account) for the target entity (the known fraudulent account) based on the known attributes of the target entity.);
collect, from the one or more external sources, the additional attributes of the known business entities (PASCARELLA, [Par.0040, 0032-0033], “According to an embodiment of the present invention, an input, as shown by 312, may include an attribute to build a connection around. For example, the input may include an IP address that is known (or suspected) to be associated with a fraudulent activity or a potential bad act. The tool may receive the IP address and then automatically identify various connections based on the IP address. For example, the tool may link to the IP address and gather different associated customers and their attributes that are associated with the IP address. The system may also identify sources of data, e.g., internal sources, external sources, third party sources, etc.” Examiner’s note: the collection of the additional attributes (the attributes related to the known attribute (IP address) associated with the known fraudulent account) for the target entity (the known fraudulent account) is based on the known attributes of the target entity (the known fraudulent account).);
enrich the known attributes with the additional attributes, thereby generating enriched training data (Pascarella, [Par.0040-0041], “According to an embodiment of the present invention, an input, as shown by 312, may include an attribute to build a connection around. For example, the input may include an IP address that is known (or suspected) to be associated with a fraudulent activity or a potential bad act. The tool may receive the IP address and then automatically identify various connections based on the IP address. For example, the tool may link to the IP address and gather different associated customers and their attributes that are associated with the IP address. The system may also identify sources of data, e.g., internal sources, external sources, third party sources, etc. The system may also provide details concerning datasets, at 316. Output 318 may illustrate a resulting network having a predetermined number order. The system may provide a training feature, through Train 320. This provides additional learning of networks and known bad events to further refine the accuracy of the system. Train 320 may also provide the ability to generate models for fraud prediction. [0041] FIG. 4 illustrates an exemplary illustration of a network, according to an embodiment of the present invention. The graphic shown in FIG. 4 is just one exemplary illustration that is simplified. Other formats and depictions of networks may be provided. In FIG. 4, Attribute 410 may represent an input attribute. For each iteration, an order of network may be generated. As shown, a first order network is shown by the nodes labeled “1.” Each node may represent an attribute, dataset and/or other data. With each iteration, additional associations may be identified. The example of FIG. 4 shows a 7th order network.”);
analyze the enriched training data, thereby generating analysis results training data (Pascarella, [Par.0040], “According to an embodiment of the present invention, an input, as shown by 312, may include an attribute to build a connection around. For example, the input may include an IP address that is known (or suspected) to be associated with a fraudulent activity or a potential bad act. The tool may receive the IP address and then automatically identify various connections based on the IP address. For example, the tool may link to the IP address and gather different associated customers and their attributes that are associated with the IP address. The system may also identify sources of data, e.g., internal sources, external sources, third party sources, etc. The system may also provide details concerning datasets, at 316. Output 318 may illustrate a resulting network having a predetermined number order. The system may provide a training feature, through Train 320. This provides additional learning of networks and known bad events to further refine the accuracy of the system. Train 320 may also provide the ability to generate models for fraud prediction.”);
and inject the analysis results training data into the one or more models (Pascarella, [Par.0017], “For example, the innovative data abstraction engine may be linked to known bad actor data and then perform automated queries on this data to proactively alert potentially fraudulent activity. The data abstraction engine may also add other attributes and apply machine learning to the associations to more intelligently describe the returned network.” And [Par.0040], “According to an embodiment of the present invention, an input, as shown by 312, may include an attribute to build a connection around. For example, the input may include an IP address that is known (or suspected) to be associated with a fraudulent activity or a potential bad act. The tool may receive the IP address and then automatically identify various connections based on the IP address. For example, the tool may link to the IP address and gather different associated customers and their attributes that are associated with the IP address. The system may also identify sources of data, e.g., internal sources, external sources, third party sources, etc. The system may also provide details concerning datasets, at 316. Output 318 may illustrate a resulting network having a predetermined number order. The system may provide a training feature, through Train 320. This provides additional learning of networks and known bad events to further refine the accuracy of the system. Train 320 may also provide the ability to generate models for fraud prediction.”).
However, Pascarella does not teach wherein the one or more models are trained to generate a score at least partially indicative of legitimate business entities and illegitimate business entities.
On the other hand, Fang teaches wherein the one or more models are trained to generate a score at least partially indicative of legitimate business entities and illegitimate business entities (Fang, [Par. 0017,0025-0026], “0017, The risk level may indicate a likelihood that the particular user account corresponds to a fraudulent account...” And 0025-0026 “Since the particular user account shares the phone number attribute and the name attribute with only the first known fraudulent user account, the account classification system may derive the loss values corresponding to the phone number attribute and the name attribute, respective, based solely on the weights assigned to the first known fraudulent user account (e.g., 200). Since the particular user account shares the bank account number attribute and the device identifier attribute with only the second known fraudulent user account, the account classification system may derive the loss values corresponding to the bank account number attribute and the device identifier attribute, respective, based solely on the weights assigned to the second known fraudulent user account (e.g., 300). This way, the attribute type that is shared with more known fraudulent user accounts will carry a larger weight in determining the risk level than the attribute type that is shared with less known fraudulent user accounts. [0026] The account classification system may then use the derived values (including the derived loss values corresponding to the different shared attribute types) to determine the risk level for the particular user account. In some embodiments, the account classification system may determine the risk level for the particular user account by comparing the derived values to a set of predetermined threshold values. 
In one example, the account classification system may configure a machine learning model (e.g., an artificial neural network) to take the derived loss values as input values to produce an output value that indicate the risk level for the particular user account. The account classification system may train the machine learning model based on historic data regarding accounts previously created that have been determined as either fraudulent accounts or non-fraudulent accounts to determine the different threshold values corresponding to the different attribute types.” Examiner’s note: the risk level is considered the probability score because the risk level is calculated by the machine learning model, the risk level indicating the likelihood that the particular account is fraudulent or not. Furthermore, the machine learning model continuously adjusts the threshold values based on the historic data associated with known fraudulent user accounts and non-fraudulent user accounts in order to determine whether the particular user account is fraudulent or not. Therefore, some user accounts are non-fraudulent and some user accounts may be fraudulent, as can be seen at [Par.0075], “…For example, the account classification module 132 may use historical account data associated with known fraudulent user account and non-fraudulent account to determine the threshold values... The machine learning module may be implemented as an artificial neural network. The risk level determination module 206 may configure the machine learning model to take the one or more of the derived values as input values in the model, and configure the machine learning model to produce an output value corresponding to the risk level of the user account 530. 
The risk level determination module 206 may also train the machine learning model using the historic account data associated with known fraudulent user account and non-fraudulent account such that the machine learning model may be trained by continuously adjusting the various threshold values corresponding to the derived values (the input values to the machine learning model) to produce the output value.”).
PASCARELLA and Fang are analogous art because they have the same field of endeavor of identifying fraud in transaction data.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the injecting of the analysis results training data into the one or more models, as taught by PASCARELLA, to include wherein the one or more models are trained to generate a score at least partially indicative of legitimate business entities and illegitimate business entities, as taught by Fang. The modification would have been obvious because one of ordinary skill in the art would be motivated to detect fraud before the fraudulent activity happens (Fang, [Par.0015], “This way, a new user account created by the malicious user who is associated with one or more known fraudulent accounts may be automatically detected even before the new user account is ever used to perform fraudulent activities.”).
Regarding claim 8, it is rejected for the same reasons as claim 1, because these claims recite the same limitations.
Regarding claim 9, Pascarella teaches the computer program product of claim 8, further comprising: program instructions to enrich the known attributes with the additional attributes, thereby generating enriched target entity data (Pascarella, [Par.0040-0041], “According to an embodiment of the present invention, an input, as shown by 312, may include an attribute to build a connection around. For example, the input may include an IP address that is known (or suspected) to be associated with a fraudulent activity or a potential bad act. The tool may receive the IP address and then automatically identify various connections based on the IP address. For example, the tool may link to the IP address and gather different associated customers and their attributes that are associated with the IP address. The system may also identify sources of data, e.g., internal sources, external sources, third party sources, etc. The system may also provide details concerning datasets, at 316. Output 318 may illustrate a resulting network having a predetermined number order. The system may provide a training feature, through Train 320. This provides additional learning of networks and known bad events to further refine the accuracy of the system. Train 320 may also provide the ability to generate models for fraud prediction. [0041] FIG. 4 illustrates an exemplary illustration of a network, according to an embodiment of the present invention. The graphic shown in FIG. 4 is just one exemplary illustration that is simplified. Other formats and depictions of networks may be provided. In FIG. 4, Attribute 410 may represent an input attribute. For each iteration, an order of network may be generated. As shown, a first order network is shown by the nodes labeled “1.” Each node may represent an attribute, dataset and/or other data. With each iteration, additional associations may be identified. The example of FIG. 4 shows a 7th order network.”);
and program instructions to use one or more recursive analysis techniques on one or more of the known attributes and the additional attributes (PASCARELLA, [Par.0025], “At step 214, the system may retrieve customer data and associated attributes. At step 216, a determination may be made as to whether the system reached a defined number of database search loops or interactions. At step 218, the system may cleanse the data. At step 220, the system may create attribute datasets. If a defined number of database search loops have been reached, the system may then combine data from database queries, at step 230. Data analytics may be performed at step 232. At step 234, data may then be prepared for consumption by other software, analysts, receiving systems, applications, etc.” and [Par.0028], “At step 214, the system may retrieve customer data and associated attributes. For example, the input attribute may be associated with a customer identifier. The customer identifier may then be used to generate additional attributes. For example, a customer identifier may be associated with household members. The customer identifier may also identify former and past identifiers, accounts and even closed or dormant accounts…”, [Par.0030], “At step 216, a determination may be made as to whether the system reached a defined number of database search loops or iterations.”, [Par.0033], “Other relevant information from various sources, including external and third party sources, may be identified and combined at step 230.” And [Par.0040], “According to an embodiment of the present invention, an input, as shown by 312, may include an attribute to build a connection around. For example, the input may include an IP address that is known (or suspected) to be associated with a fraudulent activity or a potential bad act. The tool may receive the IP address and then automatically identify various connections based on the IP address. 
For example, the tool may link to the IP address and gather different associated customers and their attributes that are associated with the IP address. The system may also identify sources of data, e.g., internal sources, external sources, third party sources, etc.”).
Regarding claim 11, it is rejected for the same reasons as claim 6, because these claims recite the same limitations.
Regarding claim 12, it is rejected for the same reasons as claim 7, because these claims recite the same limitations.
Regarding claim 13, it is rejected for the same reasons as claim 1, because these claims recite the same limitations.
Regarding claim 14, it is rejected for the same reasons as claim 2, because these claims recite the same limitations.
Regarding claim 15, it is rejected for the same reasons as claim 3, because these claims recite the same limitations.
Regarding claim 17, it is rejected for the same reasons as claim 5, because these claims recite the same limitations.
Regarding claim 18, PASCARELLA teaches the method of claim 13, wherein collecting, from the one or more external sources, the additional attributes of the target entity comprises: gathering information, with respect to the target entity, directed toward one or more of: relationships to one or more other entities; relationships to one or more individuals; relationships to one or more addresses; records of financial transactions; registration with one or more government bodies; one or more issued certifications; one or more owned real property assets; one or more intellectual property assets; one or more associated websites; one or more social media accounts; public trading data; and government-issued watch list data (PASCARELLA, [Par.0033], “Other relevant information from various sources, including external and third party sources, may be identified and combined at step 230.” and [Par.0040], “According to an embodiment of the present invention, an input, as shown by 312, may include an attribute to build a connection around. For example, the input may include an IP address that is known (or suspected) to be associated with a fraudulent activity or a potential bad act. The tool may receive the IP address and then automatically identify various connections based on the IP address. For example, the tool may link to the IP address and gather different associated customers and their attributes that are associated with the IP address. The system may also identify sources of data, e.g., internal sources, external sources, third party sources, etc. The system may also provide details concerning datasets, at 316. Output 318 may illustrate a resulting network having a predetermined number order.”).
Regarding claim 19, it is rejected for the same reasons as claim 6, because these claims recite the same limitations.
Regarding claim 20, it is rejected for the same reasons as claim 7, because these claims recite the same limitations.
Claims 4, 10 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over PASCARELLA et al. (Pub. No. US 20180373780-hereinafter, PASCARELLA) in view of Fang et al. (Pub. No. US20200065814-hereinafter, Fang), further in view of Chaine et al. (Patent No. US 8626697-hereinafter, Chaine), and further in view of LOUIE et al. (Pub. No. 20190222598-hereinafter, LOUIE).
Regarding claim 4, PASCARELLA teaches the system of claim 1, wherein the one or more processing devices are further configured to: generate, within a database, a query directed toward data for the target entity (PASCARELLA, [Par.0033], “Other relevant information from various sources, including external and third party sources, may be identified and combined at step 230.” and [Par.0040], “According to an embodiment of the present invention, an input, as shown by 312, may include an attribute to build a connection around. For example, the input may include an IP address that is known (or suspected) to be associated with a fraudulent activity or a potential bad act. The tool may receive the IP address and then automatically identify various connections based on the IP address. For example, the tool may link to the IP address and gather different associated customers and their attributes that are associated with the IP address. The system may also identify sources of data, e.g., internal sources, external sources, third party sources, etc. The system may also provide details concerning datasets, at 316. Output 318 may illustrate a resulting network having a predetermined number order.”).
However, neither PASCARELLA nor Fang teaches determine that data for the target entity is not resident in the database.
On the other hand, LOUIE teaches determine that data for the target entity is not resident in the database (LOUIE, [Par.0070], “In step 414, an identifier of the tag user is checked to see if the tag user is known to the authorized user (e.g., the publisher). When the identifier is recognized as an unknown tag user (e.g., the identifier is not found in the database 34), control proceeds to step 416. Otherwise, control proceeds to step 418. In step 416, an account number of the tag user is checked for authenticity and validity. In step 418, one or more nomenclature codes associated with the tag, such as keywords in a Uniform Resource Locator (URL), are searched for subject category matches. In step 420, a subject category fit is determined based on the subject category matches. For example, when a subject category of “sports” is associated with the tag user identified as a sport equipment dealer, the category fit is valid. However, when a subject category of “entertainment” is associated with the tag user identified as a non-entertainment entity, the category fit is invalid. Other suitable subject category matches are also contemplated to suit different applications.”).
PASCARELLA, Fang, and LOUIE are analogous art because they have the same field of endeavor of identifying fraudulent or abnormal activity.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the generating, within a database, of a query directed toward data for the target entity, as taught by PASCARELLA, to include the determining that data for the target entity is not resident in the database, as taught by LOUIE. The modification would have been obvious because one of ordinary skill in the art would be motivated to improve the digital auditing system and effectively detect unauthorized activities performed on websites (LOUIE, [Par.0007-0008], “As such, there are opportunities to develop an improved digital auditing system and method that can effectively detect unauthorized activities performed on the websites for sustaining reliable business transactions and militating against fraud and illegal activities. Advantages are achieved by the present digital auditing system or method which includes various modules and an improved database for storing specific information relating to unauthorized operational activities in corresponding websites. The present digital auditing system further includes a computer processor coupled to databases and programmed to perform particular tasks and display relational information of the unauthorized operational activities.”).
Regarding claim 10, it is rejected for the same reasons as claim 4, because these claims recite the same limitations.
Regarding claim 16, it is rejected for the same reasons as claim 4, because these claims recite the same limitations.
Conclusion
The prior art made of record and not relied upon, which is considered pertinent to applicant’s disclosure, is provided below.
Guo et al. (Pub. No.: US20200394707-hereinafter, Guo) teaches a system to identify online money laundering customer groups based on transaction records.
Ferranti et al. (Pub. No.: US2019/0164172-hereinafter, Ferranti) teaches a system to alert on geographic risk and money laundering based on public information source data.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EM N TRIEU whose telephone number is (571)272-5747. The examiner can normally be reached on Mon-Fri from 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas can be reached on (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/E.T./Examiner, Art Unit 2128
/OMAR F FERNANDEZ RIVAS/Supervisory Patent Examiner, Art Unit 2128
/CORDELIA P ZECHER/Director, TC 2100