DETAILED ACTION
This is in response to Applicant’s Request for Continued Examination filed on 12/17/2025 for Application No. 17/055,996.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1 and 4-19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Independent claims 1, 10, and 11 require “a first service provided by a first service provider” and “a second service provided by a second service provider”. The examiner was able to find support for different services, but not for the different services being provided by different providers. The most relevant paragraphs of the published patent application, ¶32 and ¶¶38-40, discuss multiple servers that can provide different services. However, multiple services could be provided by a single provider, and nothing in the disclosure indicates that the services are provided by different providers. In other words, the presence of multiple services, multiple servers providing services, or multiple service providing systems does not require the presence of different service providers providing the different services.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 4-19 are rejected under 35 U.S.C. 103 as being unpatentable over PCT WO2019049210A1 in view of Larson et al. [US 20200366671 A1, hereinafter Larson]. For convenience, the examiner will cite to US 20200272849 A1, the US counterpart to the PCT application [hereinafter D1].
As to claim 1,
D1 teaches a fraud estimation system, comprising at least one processor configured to:
store a learning model that has learned a relationship between binary comparison results that indicate whether a first user information of a user in a first service is a match to a second user information of a fraudulent user or an authentic user in a second service and determining a presence or absence of fraudulence in the first service (¶¶45-46, “model generation device 22 is, for example, a computer configured to execute learning of a machine learning model using learning data managed by the learning data management device 20. A machine learning model (learned model), based on which learning using learning data has been executed, is stored in the model storage device 24”, ¶47, “The score value determination device 16 acquires a learned model stored in the model storage device 24, and determines a score value by using the acquired learned model”, ¶59, “ feature extraction device 14 generates, based on, for example, the target order data and past order data stored in the feature extraction device 14, a feature vector representing a feature associated with the target order data. Data representing a comparison result …”);
obtain a first binary comparison result that indicates whether user information of a target user in the first service matches user information of a fraudulent user or an authentic user in the second service (¶59, “Data representing a comparison result between a value of a predetermined attribute extracted from the target order data and the value of that attribute in the past order data stored in the feature extraction device 14 may be generated”);
input the first binary comparison result into the learning model and obtain an output from the learning model (¶59, “the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated. The feature vector associated with the target order data may also be generated based on the value of the target order data and data representing the comparison result”, ¶61, “score value determination device 16 determines, based on an output produced when the feature vector received from the feature extraction device 14 is input to the learned model”, comparison result in the feature vector and input to model);
estimate fraudulence of the target user based on the output from the learning model (¶61, “larger score value may be determined for an order that has a higher possibility of being a fraudulent order”);
wherein the first binary comparison result is either affirmative or negative (Fig. 4, ¶59, “Data representing a comparison result between a value of a predetermined attribute extracted from the target order data and the value of that attribute in the past order data stored in the feature extraction device 14 may be generated”, comparison result between attributes match/non-match (i.e. affirmative or negative));
wherein the learning model has learned a relationship between a plurality of comparison results respectively corresponding to a plurality of other services and the presence or absence of fraudulence in the first service (¶59, “the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated. The feature vector associated with the target order data may also be generated based on the value of the target order data and data representing the comparison result”, ¶45, “the model generation device 22 is, for example, a computer configured to execute learning of a machine learning model using learning data”, multiple orders and attributes),
wherein the at least one processor is configured to obtain a first plurality of comparison results respectively corresponding to the plurality of other services, wherein the at least one processor is configured to obtain output from the learning model based on the plurality of comparison results (Fig. 1, 14a,14b, …, processors, ¶¶41-45,¶59, “feature extraction device 14 generates, based on, for example, the target order data and past order data stored in the feature extraction device 14, a feature vector representing a feature associated with the target order data. Data representing a comparison result”, ¶61, “score value determination device 16 determines, based on an output produced when the feature vector received from the feature extraction device 14 is input to the learned model”);
wherein the learning model has further learned a relationship between a utilization situation (session usage feature) in the first service and the presence or absence of fraudulence in the first service (Fig. 4, ¶¶55-56, “order data includes an order ID, a user ID, IP address data, delivery destination data, a credit card number, a product ID, price data, quantity data, and the like”, ¶59, “feature extraction device 14 generates, based on, for example, the target order data and past order data stored in the feature extraction device 14, a feature vector representing a feature associated with the target order data …”, ¶45, “the model generation device 22 is, for example, a computer configured to execute learning of a machine learning model using learning data”, ¶¶62-64),
wherein the at least one processor is configured to obtain a utilization situation of the first service by the target user, and wherein the at least one processor is configured to obtain output from the learning model based on the utilization situation by the target user (¶¶58-59, “feature extraction device 14 generates, based on, for example, the target order data and past order data stored in the feature extraction device 14, a feature vector representing a feature associated with the target order data …”, ¶61, “score value determination device 16 determines, based on an output produced when the feature vector received from the feature extraction device 14 is input to the learned model”);
wherein the at least one processor is configured to limit the use of the first service, based on an IP address associated with the target user (¶55, “… FIG. 3, the order data includes an order ID, a user ID, IP address data”), by the target user when the target user is estimated as fraudulent; and permit the use of the first service by the target user when the target user is estimated as not fraudulent (¶63, “1 is set as the result data value for an order determined to be a fraudulent order, and 0 is set as the result data value for an order determined not to be a fraudulent order”, ¶91, “when the value of the result data associated with the target order data is 0, the electronic commerce system 10 may proceed with the order processing for the order associated with the target order data as a valid order. As another example, when the value of the result data associated with the target order data is 1, the electronic commerce system 10 may stop the order associated with the target order data”).
D1 does not explicitly teach that the first service is provided by a first service provider and that the second service is provided by a second service provider.
Larson teaches a learning model (¶218, “FIG. 66 illustrates an example NN 6600 suitable for use by the IVS and/or related services“, ¶219, “ NN 6600 may represent one or more ML models that are trained using training data”, ¶228, “ML models are then used by the component 113 and/or IVS 140 to detect malicious/fraudulent behaviors, and/or perform various identity verification tasks”), that has learned a relationship between binary comparison results (¶219, “ML algorithms build or estimate mathematical model(s) (referred to as “ML models,” “models,” or the like) based on sample data (referred to as “training data,” “model training information,” or the like) in order to make predictions, inferences, or decisions”, ¶230, ¶243, “DII is trained on the DIN data to detect behaviors that deviate from trusted digital identity behaviors”, ¶246, “Transactions are compared against the trusted digital identity of the real user to identify anomalies that might indicate the use of stolen identity data, for example a mismatch between devices and locations or identity information usually associated with a digital identity”, ¶226, “The output variables (yi) 6604 may include a determined response (e.g., whether an image or audio data is spoofed or spliced, whether fraudulent activity is detected, and so forth)”, ¶238, ¶253, “Information reviewed includes comparisons of data that should be associated with other data elements (good if they are, bad if they are not) …”) that indicate whether a first user information of a user in a first service provided by a first service provider (¶112, “where a user is attempting to verify their identity for a financial transaction, the IVS 140 may tie a name on the user's credit card to the name/dentity being authenticated”, ¶230, “Users (enrollees or authenticated users) start their Proven Identity journey through a rapid authentication process”, ¶241, “ IVS provides improved customer satisfaction—Fast, easy and secure accessing of the user's account 
information (e.g., financial, telecom accounts, etc.)“, ¶228, “ML models are then used by the component 113 and/or IVS 140 to detect malicious/fraudulent behaviors, and/or perform various identity verification tasks as discussed herein. After the ML models are trained, the ML models may be utilized for the various services”) is a match to a second user information of a fraudulent user or an authentic user in a second service provided by a second service provider (¶242, “IVS cross-references that information with various identity databases and systems”, ¶¶4-5, “Businesses or government agencies may verify the identity of the real person using identity information … or they may verify identity information against authoritative sources (e.g., credit bureaus, government database(s), corporate database(s), etc.)”, ¶16, ¶18, “other information/data is used to detect fraudulent activity or otherwise determine a likelihood of fraudulent activity. For example, the geolocation and other location information may be compared against a list of location data of known fraudsters …”, ¶29, “some or all of the identity verification services may be provided by or accessed from third party systems/services, and in some of these embodiments, the information provided by the third party systems/services may be enhanced or amended using information collected by the IVS …”, ¶42, ¶55, ¶74, “ IVS servers 145 may provide interfaces that allow the client system 105B to access captured biometric and/or identity data, revise or comment on individual data items, and/or search various databases within or outside of the IVS 140 for various information/data about applicants/enrollees”, ¶92, ¶243, “IVS is also powered by shared intelligence from over 40,000 websites and apps across industries and geographies to recognize the one unique digital identity associated with every Applicant. 
Using AI and ML, the IVS tracks authenticity metrics and reputational integrity to separate synthetic and fraudulent identities in real time”, ¶262, “compiled directly from thousands of reliable and trusted sources. This includes all national consumer credit reporting agencies, online, utility …”, ¶248) and determining a presence or absence of fraudulence in the first service (¶228, “ML models are then used by the component 113 and/or IVS 140 to detect malicious/fraudulent behaviors, and/or perform various identity verification tasks as discussed herein. After the ML models are trained, the ML models may be utilized for the various services”, ¶243, “IVS is also powered by shared intelligence from over 40,000 websites and apps … Using AI and ML, the IVS tracks authenticity metrics and reputational integrity to separate synthetic and fraudulent identities in real time”, ¶241, “IVS reduces identity theft, fraud, and associated costs—this is extremely valuable to corporations and businesses as identity theft is consistently the leading complaint filed with the Federal Trade Commission”);
wherein the at least one processor is configured to limit the use of the first service, based on an IP address associated with the target user (¶17, “, other location information associated with the user's device (e.g., location based on IP addresses even if hidden behind hidden proxies and VPNs) “, ¶40, “ client application 110 may collect various data from the client system 105A without direct user interaction with the client application … an IP address of the client system“, ¶¶41-42, “other location information (e.g., using triangulation, LTE/5G location services, WiFi positioning, IP address location correlations, etc.); comparing biographic and/or user agent data against a list of known fraudsters listed in one or more blacklists; time that the user's identity information has existed, for example, to detect recently established identities that are typically fraudsters; identify known associates of the user and whether or not the known associates are associated with high fraud incidences”, ¶94, ¶242, “VS uses the largest and richest global repository of online digital identity data in the world to filter through over 600,000 known physical addresses, 700,000 unique IP addresses …”), by the target user when the target user is estimated as fraudulent; and permit the use of the first service by the target user when the target user is estimated as not fraudulent (¶20, “IVS also prevents identity theft and other fraudulent activities by identifying the tactics used by identity thieves and other malicious actors, and blocks the fraudulent activities and/or notifies potential victims of the fraudulent activities”).
D1 and Larson are analogous art to the claimed invention because they are from a similar field of endeavor of fraud prevention, in particular identity verification and information security technologies. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify D1 to incorporate the teachings of Larson, with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify D1 as described above to allow individual users to update and enhance the completeness of their identity profiles for a more seamless identity verification process when attempting to obtain products or services from third party service providers, and to enhance user privacy and prevent identity theft or other malicious identity-based abuses (Larson ¶14).
As to claim 4,
D1-Larson teach the fraud estimation system according to claim 1,
wherein, in the first service, fraudulence is estimated based on user information of a predetermined item (D1, ¶¶55-56, “order data includes an order ID, a user ID, IP address data, delivery destination data, a credit card number, a product ID, price data, quantity data, and the like”, Larson, ¶103, “IVS 140 determines, based on the scanned biometric data, that the user is attempting to verify/authenticate their identity for accessing services provided by an SPP 120 (e.g., a financial institution, etc.)”, ¶137, “ verify his/her identity for completing a money transfer using a separate mobile banking application”, ¶112, “ IVS 140 does not authenticate the user just because they have an enrolled identity and are now trying to complete a transaction under a different identity. In these embodiments, the user may register or otherwise store various payment cards (e.g., credit or debit cards) with the IVS 140, and the IVS 140 may match them to the user's identity since accounts at financial institutions or other business may use a variety of names for the same person”, “IVS 140 may tie a name on the user's credit card to the name/dentity being authenticated”), and
wherein the utilization situation is a utilization situation about the predetermined item (¶¶55-56, “order data includes an order ID, a user ID, IP address data, delivery destination data, a credit card number, a product ID, price data, quantity data, and the like”, ¶61, the utilization situation is the current user session that includes the product ID and quantity data, Larson, ¶112, “ IVS 140 does not authenticate the user just because they have an enrolled identity and are now trying to complete a transaction under a different identity. In these embodiments, the user may register or otherwise store various payment cards (e.g., credit or debit cards) with the IVS 140, and the IVS 140 may match them to the user's identity since accounts at financial institutions or other business may use a variety of names for the same person”, “IVS 140 may tie a name on the user's credit card to the name/dentity being authenticated”). The same motivation to combine for claim 1 equally applies for current claim.
As to claim 5,
D1-Larson teach the fraud estimation system according to claim 1,
wherein, in the first service and the second service each, a plurality of items of user information are registered (D1, ¶¶55-56, “order data includes an order ID, a user ID, IP address data, delivery destination data, a credit card number, a product ID, price data, quantity data, and the like”, ¶¶58-59, “feature extraction device 14 generates, based on, for example, the target order data and past order data stored in the feature extraction device 14, a feature vector representing a feature associated with the target order data …”, Larson, ¶112, “ IVS 140 does not authenticate the user just because they have an enrolled identity and are now trying to complete a transaction under a different identity. In these embodiments, the user may register or otherwise store various payment cards (e.g., credit or debit cards) with the IVS 140, and the IVS 140 may match them to the user's identity since accounts at financial institutions or other business may use a variety of names for the same person”, ¶¶134-135),
wherein the learning model has learned relationships between a plurality of comparison results respectively corresponding to the plurality of items and the presence or absence of fraudulence in the first service (D1, ¶59, “the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated. The feature vector associated with the target order data may also be generated based on the value of the target order data and data representing the comparison result”, ¶45, “the model generation device 22 is, for example, a computer configured to execute learning of a machine learning model using learning data”, Larson, ¶243, “IVS is also powered by shared intelligence from over 40,000 websites and apps … Using AI and ML, the IVS tracks authenticity metrics and reputational integrity to separate synthetic and fraudulent identities in real time”, ¶241, Larson, ¶226, “The output variables (yi) 6604 may include a determined response (e.g., whether an image or audio data is spoofed or spliced, whether fraudulent activity is detected, and so forth)”, ¶253, “comparisons of data that should be associated with other data elements (good if they are, bad if they are not)”, ¶246, “Transactions are compared against the trusted digital identity of the real user to identify anomalies that might indicate the use of stolen identity data”, ¶237),
wherein the at least one processor is configured to obtain a second plurality of comparison results respectively corresponding to the plurality of items (¶59, “the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated. The feature vector associated with the target order data may also be generated based on the value of the target order data and data representing the comparison result”, multiple orders and attributes, Larson, ¶238, “IVS compares the image of the user to the facial biometrics captured in the first step”, ¶242, “IVS cross-references that information with various identity databases and systems”, ¶243, “shared intelligence from over 40,000 websites and apps across industries and geographies”, ¶248, “ IVS incorporates multiple identity and fraud database searches and assessments”, ¶253, “comparisons of data that should be associated with other data elements (good if they are, bad if they are not)”, ¶237, ¶246, “Transactions are compared against the trusted digital identity of the real user to identify anomalies that might indicate the use of stolen identity data”, ¶254, “Thousands of attributes are reviewed and aggregated”), and
wherein the at least one processor is configured to obtain output from the learning model based on the second plurality of comparison results (D1, ¶61, “score value determination device 16 determines, based on an output produced when the feature vector received from the feature extraction device 14 is input to the learned model”, Larson, ¶218, “NN 6600 suitable for use by the IVS and/or related services”, ¶219, “NN 6600 may represent one or more ML models that are trained using training data. The term “machine learning” or “ML” refers to the use of computer systems implementing algorithms and/or statistical models … to make predictions, inferences, or decisions”, ¶221, “ output layer 6616 outputs the determinations or assessments (yi)”, ¶226, “output variables (yi) 6604 may include a determined response (e.g., whether an image or audio data is spoofed or spliced, whether fraudulent activity is detected, and so forth)”). The same motivation to combine for claim 1 equally applies for current claim.
As to claim 6,
D1-Larson teach the fraud estimation system according to claim 1,
wherein, in the second service, fraudulence is estimated based on user information of a predetermined item (D1, ¶¶55-56, “order data includes an order ID, a user ID, IP address data, delivery destination data, a credit card number, a product ID, price data, quantity data, and the like”, ¶61, “score value determination device 16 determines, based on an output produced when the feature vector received from the feature extraction device 14 is input to the learned model”, Larson, ¶241, “subscribing enterprises and their customers”, ¶243, “IVS is also powered by shared intelligence from over 40,000 websites and apps across industries“, “to separate synthetic and fraudulent identities in real time”, ¶226, “ whether fraudulent activity is detected”, ¶233, “facial biometrics … biometric signature”, ¶234, “hand (palm) biometrics”, ¶236, “voice biometrics”, ¶238, “identity document and biographical data authentication”),
wherein the learning model has learned a relationship between a comparison result of user information of the predetermined item and the presence or absence of fraudulence in the first service (D1, ¶59, “the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated. The feature vector associated with the target order data may also be generated based on the value of the target order data and data representing the comparison result”, ¶45, “the model generation device 22 is, for example, a computer configured to execute learning of a machine learning model using learning data”, Larson, ¶71, ¶219, “ ML models that are trained using training data”, ¶220, “ML algorithms build or develop ML models“), and
wherein the at least one processor is configured to obtain a comparison result of the predetermined item (D1, ¶59, “data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated”, Larson, ¶108, “SBIDS 2B93 generates and sends a confidence score to the SBSP 2B92”, ¶109, “after a confidence score is calculated for each collected secondary biometric data/model. At operation 2B17, the SBSP 2B92 provides matched member and enrollment IDs back to the web service 2B92, and at operation 2B18, the web service determines a highest matching member/enrollment ID that meets a threshold”). The same motivation to combine for claim 1 equally applies for current claim.
As to claim 7,
D1-Larson teach the fraud estimation system according to claim 1,
wherein, in the second service, fraudulence is estimated based on user information of a first item (D1, ¶¶55-56, “order data includes an order ID, a user ID, IP address data, delivery destination data, a credit card number, a product ID, price data, quantity data, and the like”, ¶61, “score value determination device 16 determines, based on an output produced when the feature vector received from the feature extraction device 14 is input to the learned model”, Larson, ¶246, “Transactions are compared against the trusted digital identity of the real user to identify anomalies“, ¶243, “Using AI and ML, the IVS tracks authenticity metrics and reputational integrity to separate synthetic and fraudulent identities in real time “,),
wherein the learning model has learned a relationship between a comparison result of user information of a second item and the presence or absence of fraudulence in the first service (¶59, “the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated. The feature vector associated with the target order data may also be generated based on the value of the target order data and data representing the comparison result”, ¶45, “the model generation device 22 is, for example, a computer configured to execute learning of a machine learning model using learning data”, Larson, ¶226, “The output variables (yi) 6604 may include a determined response (e.g., whether an image or audio data is spoofed or spliced, whether fraudulent activity is detected”, ¶228, “ ML models are then used by the component 113 and/or IVS 140 to detect malicious/fraudulent behaviors, and/or perform various identity verification tasks“, ¶246, “Transactions are compared against the trusted digital identity of the real user to identify anomalies“), and
wherein the at least one processor is configured to obtain a comparison result of the second item (D1, ¶59, “the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated”, Larson, ¶108, “SBIDS 2B93 generates and sends a confidence score to the SBSP 2B92”, ¶109, “the web service determines a highest matching member/enrollment ID that meets a threshold”). The same motivation to combine for claim 1 equally applies for current claim.
As to claim 8,
D1-Larson teach the fraud estimation system according to claim 1,
wherein, in the second service, user information of the target user in the first service and user information of a fraudulent user or an authentic user in the second service are compared (D1, ¶59, “Data representing a comparison result between a value of a predetermined attribute extracted from the target order data and the value of that attribute in the past order data stored in the feature extraction device 14 may be generated. In this case, from among the past order data stored in the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated”, ¶61, “score value determination device 16 determines, based on an output produced when the feature vector received from the feature extraction device 14 is input to the learned model, the score value associated with the feature vector, that is, the score value associated with the target order data. For example, a larger score value may be determined for an order that has a higher possibility of being a fraudulent order”, Larson, ¶103, “An authentication occurs when the IVS 140 determines, based on the scanned biometric data, that the user is attempting to verify/authenticate their identity for accessing services provided by an SPP 120 (e.g., a financial institution, etc.)”, ¶112, “if a user (as an enrollee or active user) attempts the authentication/verification process and presents a fake identity and the IVS 140 our system confirms their true identity as being different than the fake identity … return the name of the authenticated identity”), and
wherein the at least one processor is configured to obtain a result of the comparison from the second service (D1, ¶59, “Data representing a comparison result between a value of a predetermined attribute extracted from the target order data and the value of that attribute in the past order data stored in the feature extraction device 14 may be generated. In this case, from among the past order data stored in the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated”, Larson, ¶103, “An authentication occurs when the IVS 140 determines, based on the scanned biometric data, that the user is attempting to verify/authenticate their identity for accessing services provided by an SPP 120 (e.g., a financial institution, etc.)”, ¶112, “if a user (as an enrollee or active user) attempts the authentication/verification process and presents a fake identity and the IVS 140 our system confirms their true identity as being different than the fake identity … return the name of the authenticated identity”). The same motivation to combine for claim 1 equally applies to the current claim.
As to claim 9,
D1 teaches the fraud estimation system according to claim 1, wherein the at least one processor is configured to receive a utilization request that is a request for use of the first service by the target user (D1, ¶¶55-56, “order data includes an order ID, a user ID, IP address data, delivery destination data, a credit card number, a product ID, price data, quantity data, and the like”, ¶61, “score value determination device 16 determines, based on an output produced when the feature vector received from the feature extraction device 14 is input to the learned model”, the order request is a utilization request and the order data is a request to use the service, Larson, ¶103, “An authentication occurs when the IVS 140 determines, based on the scanned biometric data, that the user is attempting to verify/authenticate their identity for accessing services provided by an SPP 120 (e.g., a financial institution, etc.)”, ¶¶137-138, “a third party platform employee may request to verify a user's identity for completing a money transfer”), and
wherein the at least one processor is configured to estimate fraudulence of the target user when the first service is used by the target user (D1, ¶59, “Data representing a comparison result between a value of a predetermined attribute extracted from the target order data and the value of that attribute in the past order data stored in the feature extraction device 14 may be generated. In this case, from among the past order data stored in the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated”, ¶61, “score value determination device 16 determines, based on an output produced when the feature vector received from the feature extraction device 14 is input to the learned model, the score value associated with the feature vector, that is, the score value associated with the target order data. For example, a larger score value may be determined for an order that has a higher possibility of being a fraudulent order”, Larson, ¶112, “if a user (as an enrollee or active user) attempts the authentication/verification process and presents a fake identity and the IVS 140 our system confirms their true identity as being different than the fake identity … return the name of the authenticated identity”, ¶246, “Transactions are compared against the trusted digital identity of the real user to identify anomalies that might indicate the use of stolen identity data”). The same motivation to combine for claim 1 equally applies to the current claim.
As to claims 10 and 11,
Claims 10 and 11 are similar in scope to claim 1; therefore, they are rejected under a similar rationale.
As to claim 12,
D1 teaches the fraud estimation system according to claim 1, wherein the processor is not configured to receive user information from the first service (Fig. 10A, ¶¶128-129, the processor is processor 12A (fraud determiner) and data are received by receiver 40, not the processor, Larson, ¶103, “text message may include a link 27B13, which when selected by the user by performing a tap gesture 27B20 on the link 27B13, may cause the application 110 to be executed to authenticate the user's identity”). The same motivation to combine for claim 1 equally applies to the current claim.
As to claim 13,
D1 teaches the fraud estimation system according to claim 1, wherein the processor is not configured to receive user information from the second service (¶59, “feature extraction device 14 generates, based on, for example, the target order data and past order data stored in the feature extraction device”, Larson, ¶201, “communication circuitry 6409 also includes TRx 6412 to enable communication with wireless networks”, ¶202, “Network interface circuitry/controller (NIC) 6416 may be included to provide wired communication to the network …”, ¶203, “external interface 6418 (also referred to as “I/O interface circuitry” or the like) is configured to connect or coupled the system 6400 with external devices or subsystems”, ¶242, “the biographical information is collected from the Applicant, the IVS cross-references that information with various identity databases and systems”, ¶219, “After training, an ML model may be used to make predictions on new datasets”, ¶226, “The output variables (yi) 6604 may include a determined response (e.g., whether an image or audio data is spoofed or spliced, whether fraudulent activity is detected”, ¶228, “ML models are then used by the component 113 and/or IVS 140 to detect malicious/fraudulent behaviors”, the processor operates on outputs, not raw user information). The same motivation to combine for claim 1 equally applies to the current claim.
As to claim 14,
D1 teaches the fraud estimation system according to claim 1, wherein the comparison result indicates whether a match of information has occurred or a match of information has not occurred, when comparing user information of the target user and user information of the fraudulent user or authentic user (D1, Fig. 4, ¶59, “Data representing a comparison result between a value of a predetermined attribute extracted from the target order data and the value of that attribute in the past order data stored in the feature extraction device 14 may be generated”, the comparison result between attributes is match/non-match (i.e., affirmative or negative), Larson, ¶¶235-236, ¶226, “The output variables (yi) 6604 may include a determined response (e.g., whether an image or audio data is spoofed or spliced, whether fraudulent activity is detected,”, ¶228, “The ML models are then used by the component 113 and/or IVS 140 to detect malicious/fraudulent behaviors”, ¶238, “IVS compares the image of the user to the facial biometrics captured in the first step”, ¶242, “IVS cross-references that information with various identity databases and systems”, ¶253, “comparisons of data that should be associated with other data elements (good if they are, bad if they are not)”, ¶237, ¶246, “Transactions are compared against the trusted digital identity of the real user to identify anomalies that might indicate the use of stolen identity data”). The same motivation to combine for claim 1 equally applies to the current claim.
As to claim 15,
D1 teaches the fraud estimation system according to claim 1, wherein the learning model is trained with teacher data comprising:
utilization situation data including a transaction value and a transaction frequency value (D1, Fig. 4, ¶55, “order data includes an order ID, a user ID, IP address data, delivery destination data, a credit card number, a product ID, price data, quantity data, and the like”, ¶80, “when it is identified that orders from 100 or more different IP addresses have been generated from the same user within one hour, result data, in which 1 is set as a value, associated with the target order data may be generated”, Larson, ¶256, “Evaluating user and device interactions against historical interactions and known bad behaviors creates another valuable identity metric. Variables include frequency and timing of transactions; average time between events, velocity and frequency”, ¶219, “ ML models that are trained using training data”);
comparison results data including information indicating whether an Internet protocol address and device identification are blacklisted in each of a plurality of services (D1, ¶¶79-80, “it is first determined whether or not the value (e.g., user ID) of an attribute of the target order data is included in a white list or a blacklist stored in the fraudulent order determination device 12 (list determination)”, Larson, ¶42, “comparing biographic and/or user agent data against a list of known fraudsters listed in one or more blacklists”, ¶101, “when the applicant is declined, the applicant's biographic data may be added to a black list maintained by the SPP 120, which may be used to immediately deny content/services”, ¶248, “an Applicant's computing device 105 is assessed to verify it is associated with the Applicant and not a device known to be associated with fraudulent activities”, ¶245, “IVS detects the use of VPNs and captures WiFi, cellular, and/or GPS details which are compared to IP address information”, ¶247, “global threat information such as known fraudsters and botnet participation”); and
a fraudulence flag value which indicates whether the user is fraudulent (D1, Fig. 4, ¶63, “1 is set as the result data value for an order determined to be a fraudulent order, and 0 is set as the result data value for an order determined not to be a fraudulent order”, Larson, ¶226, “The output variables (yi) 6604 may include a determined response (e.g., whether an image or audio data is spoofed or spliced, whether fraudulent activity is detected,”, ¶228, “The ML models are then used by the component 113 and/or IVS 140 to detect malicious/fraudulent behaviors”). The same motivation to combine for claim 1 equally applies to the current claim.
As to claim 16,
D1 teaches the fraud estimation system according to claim 1, wherein the processor is configured to store the learning model (D1, ¶47, “model storage device 24 is, for example, a computer configured to store a learned model generated by the model generation device 22”, ¶104, “When the score value determination device 16 detects that a new learned model is stored in the model storage device 24, the score value determination device 16 acquires the new learned model from the model storage device 24”, ¶150, Larson, ¶213, ¶219) that has learned the relationship between comparison results that are the result of comparing the first user information of the user in the first service, which provides a first service to the user, to second user information of the fraudulent user or the authentic user in the second service (D1, ¶59, “Data representing a comparison result between a value of a predetermined attribute extracted from the target order data and the value of that attribute in the past order data stored in the feature extraction device 14 may be generated. In this case, from among the past order data stored in the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated”, Larson, ¶242, “IVS cross-references that information with various identity databases and systems”, ¶243, ¶246, “Transactions are compared against the trusted digital identity of the real user to identify anomalies that might indicate the use of stolen identity data”), which provides a second service to the user, and determining the presence or absence of fraudulence in the first service (D1, ¶¶62-64, “learned model to which the feature vector associated with target order data is input has learned the learning data shown in FIG. 4. As shown in FIG. 4, the learning data includes, for example, an order ID, a feature vector, and result data”, ¶61, “score value determination device 16 determines, based on an output produced when the feature vector received from the feature extraction device 14 is input to the learned model, the score value associated with the feature vector, that is, the score value associated with the target order data. For example, a larger score value may be determined for an order that has a higher possibility of being a fraudulent order”, ¶67, “fraudulent order determination device 12 generates the estimation result data associated with the target order data shown in FIG. 6 based on the received score value and the evaluation data generated by the evaluation data generation device”, ¶¶78-81, fraud/not fraud decision by the system (result data), Larson, ¶226, “The output variables (yi) 6604 may include a determined response (e.g., whether an image or audio data is spoofed or spliced, whether fraudulent activity is detected”, ¶228, “ML models are then used by the component 113 and/or IVS 140 to detect malicious/fraudulent behaviors, and/or perform various identity verification tasks”). The same motivation to combine for claim 1 equally applies to the current claim.
As to claim 17,
D1 teaches the fraud estimation system according to claim 1,
wherein the first service is a first electronic settlement service (Fig. 2, 10, ¶39, “computer system configured to process requests for ordering, shipping, payment, and the like of products and services from users”, ¶59, “Data representing a comparison result between a value of a predetermined attribute extracted from the target order data and the value of that attribute in the past order data stored in the feature extraction device 14 may be generated. In this case, from among the past order data stored in the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated”, Larson, ¶112, “where a user is attempting to verify their identity for a financial transaction, the IVS 140 may tie a name on the user's credit card to the name/dentity being authenticated”, ¶241, “IVS provides improved customer satisfaction—Fast, easy and secure accessing of the user's account information (e.g., financial, telecom accounts, etc.)”, ¶228, “ML models are then used by the component 113 and/or IVS 140 to detect malicious/fraudulent behaviors”, ¶270, “IVS supports transaction confirmation, where data, such as the payee and amount of a payment are signed”);
wherein the second service is either a second financial service, a second electronic transaction service, a second insurance service, a second communication service, a second home delivery service, or a second video streaming service (Fig. 2, 10, ¶39, “computer system configured to process requests for ordering, shipping, payment, and the like of products and services from users”, ¶59, “Data representing a comparison result between a value of a predetermined attribute extracted from the target order data and the value of that attribute in the past order data stored in the feature extraction device 14 may be generated. In this case, from among the past order data stored in the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated”, Larson, ¶241, “IVS provides improved customer satisfaction—Fast, easy and secure accessing of the user's account information (e.g., financial, telecom accounts, etc.)”, ¶243, “IVS is also powered by shared intelligence from over 40,000 websites and apps across industries”, “DIN collects and processes global shared intelligence from millions of daily consumer interactions including logins, payments, and new account applications”, ¶112, “where a user is attempting to verify their identity for a financial transaction, the IVS 140 may tie a name on the user's credit card to the name/dentity being authenticated”); and
wherein the comparison result includes information indicating whether the target user is on a blacklist of the first service (¶¶79-80, “it is first determined whether or not the value (e.g., user ID) of an attribute of the target order data is included in a white list or a blacklist stored in the fraudulent order determination device 12 (list determination)”, Larson, ¶42, “comparing biographic and/or user agent data against a list of known fraudsters listed in one or more blacklists”, ¶101, “when the applicant is declined, the applicant's biographic data may be added to a black list maintained by the SPP 120, which may be used to immediately deny content/services”, ¶112, “where a user is attempting to verify their identity for a financial transaction, the IVS 140 may tie a name on the user's credit card to the name/dentity being authenticated”). The same motivation to combine for claim 1 equally applies to the current claim.
As to claim 18,
D1 teaches the fraud estimation system according to claim 1, wherein the processor is configured to: request a comparison processing from the first service and the second service (¶59, “Data representing a comparison result between a value of a predetermined attribute extracted from the target order data and the value of that attribute in the past order data stored in the feature extraction device 14 may be generated. In this case, from among the past order data stored in the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated”, Larson, ¶103, “IVS 140 determines, based on the scanned biometric data, that the user is an existing member of the IVS 140 (or has already had their identity verified by the IVS 140)”, ¶104, “the client application 110 sends primary biometric data and secondary biometric data to a web service 2B91”, “web service 2B91 sends the primary biometric data (e.g., face image collected by the client application 110) to a primary biometric service provider 2B94 (e.g., a FaceProvider) with a command/instruction to identify potential matches”, ¶108, “SBSP 2B92 calls a secondary biometric identity detection service (SBIDS) 2B93 to compare the collected secondary biometric data/model”); receive the first binary comparison result from the first service and the second service (Fig. 4, ¶63, “1 is set as the result data value for an order determined to be a fraudulent order, and 0 is set as the result data value for an order determined not to be a fraudulent order”, ¶79, “result data, in which 0 is set as a value, associated with the target order data is generated”, ¶80, “result data, in which 1 is set as a value, associated with the target order data may be generated”, Larson, ¶108, “SBIDS 2B93 generates and sends a confidence score to the SBSP 2B92”, ¶109, “web service determines a highest matching member/enrollment ID that meets a threshold”, ¶112). The same motivation to combine for claim 1 equally applies to the current claim.
As to claim 19,
D1 teaches the fraud estimation system according to claim 15, wherein the processor is configured to:
obtain the utilization situation data of the target user by referring to a utilization situation database (D1, ¶54, “order data like that shown in FIG. 3 is transmitted from the electronic commerce system 10 to the fraudulent order determination device 12”, ¶¶55-56, “order data includes an order ID, a user ID, IP address data, delivery destination data, a credit card number, a product ID, price data, quantity data, and the like”, ¶58, “order determination device 12 transmits the target order data to the feature extraction device 14”, data related to the current transaction, Larson, ¶54, “identity verification service provided by the IVS 140 may include lie (or truthfulness) detection services, which are used to evaluate the truthfulness of the person … changes in behavior”); and
train the learning model (D1, ¶¶62-64, ¶¶98-100, “learning data associated with the target order data is generated based on the feature vector associated with the target order data and the result management data associated with the target order data. For example, learning data including the order ID, the feature vector associated with the order ID, and result management data associated with the order ID may be generated”, Larson, ¶54, “Analysis of the image/video data and the voice data discussed previously for micro-expressions may be accomplished using any suitable AI, machine-learning, and/or deep learning techniques, such as any of those discussed herein and/or variants or combinations thereof”) with the utilization situation data (Fig. 3), comparison results data (D1, ¶59, “Data representing a comparison result between a value of a predetermined attribute extracted from the target order data and the value of that attribute in the past order data stored in the feature extraction device 14 may be generated. In this case, from among the past order data stored in the feature extraction device 14, order data having the same user ID as that of the target order data may be identified. Then, data representing the comparison result between the value of the predetermined attribute extracted from the target order data and the value of the attribute in the identified order data may be generated”), and the fraudulence flag (¶63, “1 is set as the result data value for an order determined to be a fraudulent order, and 0 is set as the result data value for an order determined not to be a fraudulent order”, Larson, ¶54, “IVS 140 may include lie (or truthfulness) detection services, which are used to evaluate the truthfulness of the person during the live interview. Data of existing and/or publicly available videos and audio samples that depict or are otherwise representative of untruthfulness or deception are cross-referenced with collated video data of both failed and successful enrollment attempts on the secure enrollment platform (e.g., IVS 140) to build algorithms on key attributes of deceptiveness, for example, body movements, eye misdirection, voice alterations, and changes in behavior”). The same motivation to combine for claim 15 equally applies to the current claim.
Response to Arguments
Examiner respectfully withdraws the rejection under 35 U.S.C. § 101 based on the provided amendments.
Applicant’s arguments, see P.11-13, filed 12/17/2025, with respect to the rejection(s) of claim(s) 1 and 4-20 under D1 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Larson, which discloses the use of different services provided by different service providers to detect fraudulent transactions. See at least ¶242, ¶¶4-5, “Businesses or government agencies may verify the identity of the real person using identity information … or they may verify identity information against authoritative sources (e.g., credit bureaus, government database(s), corporate database(s), etc.)”, ¶16, ¶18, “other information/data is used to detect fraudulent activity or otherwise determine a likelihood of fraudulent activity. For example, the geolocation and other location information may be compared against a list of location data of known fraudsters …”, ¶29, “some or all of the identity verification services may be provided by or accessed from third party systems/services, and in some of these embodiments, the information provided by the third party systems/services may be enhanced or amended using information collected by the IVS …”, ¶42, ¶55, ¶74, “IVS servers 145 may provide interfaces that allow the client system 105B to access captured biometric and/or identity data, revise or comment on individual data items, and/or search various databases within or outside of the IVS 140 for various information/data about applicants/enrollees”, ¶92, ¶243, “IVS is also powered by shared intelligence from over 40,000 websites and apps across industries and geographies to recognize the one unique digital identity associated with every Applicant. Using AI and ML, the IVS tracks authenticity metrics and reputational integrity to separate synthetic and fraudulent identities in real time”, ¶262, “compiled directly from thousands of reliable and trusted sources.
This includes all national consumer credit reporting agencies, online, utility …”, ¶248.
As to the remaining dependent claims, Applicant argues that they are allowable due to their respective direct and indirect dependencies upon one of the aforementioned independent claims. The examiner respectfully disagrees; the independent claims are not allowable, as stated in the paragraph above in this “Response to Arguments” section of this Office action.
Conclusion
The prior art made of record and not relied upon is considered pertinent to the applicant's disclosure.
US Patent Application Publication No. 20190020759 by Kuang discloses the ability to detect fraud based on different user characteristics, such as checking whether the user is on a blacklist owned by a third party, e.g., the FBI frauds and scams database. See at least ¶¶49.
Examiner has pointed out particular references contained in the prior art of record in the body of this action for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing the response, fully consider the entire references as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. It is noted that any citation to specific pages, columns, figures, or lines in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED ABOU EL SEOUD whose telephone number is (303)297-4285. The examiner can normally be reached Monday-Thursday 9:00am-6:00pm MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMED ABOU EL SEOUD/Primary Examiner, Art Unit 2148