DETAILED ACTION
This is an Office action on the merits in response to the application filed on 12/29/2025.
Claims 1-20 have been filed by the applicant.
Claims 1-2, 8, 10-12 and 16-17 are currently amended.
Claims 1-20 are currently pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/29/2025 has been entered.
Response to Arguments
Rejection under 101:
The applicant argues that the machine learning algorithm is not recited at a high level because it is required to detect risk using “at least one clustering algorithm in conjunction with one or more additional unsupervised learning techniques”. The examiner respectfully disagrees. Because detecting risk embodies the abstract idea, the claim itself is merely a recitation of the abstract idea and an instruction to “apply it” on a computer running a machine learning algorithm. The applicant further argues that using “at least one clustering algorithm in conjunction with one or more additional unsupervised learning techniques” provides an improvement to system security. However, this merely implements the abstract idea using machine learning technology; the claims do not recite an improvement to the machine learning technology itself. It generally links the use of the judicial exception to a particular technological environment (MPEP § 2106.05(h)).
The applicant further argues that the claims provide a technical improvement to system security by “encapsulating data packet of risk information and executing interaction based on data packet.” The examiner respectfully disagrees. The steps merely gather data and perform actions. The terms “automated actions”, “encapsulating”, “data packet” and “executing” amount to no more than an instruction to “apply” the abstract idea on a computer. Such steps are consistent with the abstract idea of “following instructions”, which falls under the managing personal interactions grouping of “certain methods of organizing human activity”.
As stated above, the unsupervised learning technique and clustering algorithm are part of the machine learning technology used to automate the abstract idea as recited. Therefore, they generally link the use of the judicial exception to a particular technological environment (MPEP § 2106.05(h)).
Therefore, the 101 rejection is maintained.
Rejection under 103:
The applicant argues that Jarosch does not disclose “encapsulating at least one data packet incorporating information related to…one or more conditions…”. The examiner respectfully disagrees. Jarosch discloses associating the transaction instrument with the secondary token such that there is a one-to-one relationship between the transaction instrument and the secondary token [0027 0064], which represents a condition. As the limitation has been amended to change the scope of the claims, the examiner has revised the 103 rejections accordingly; see the 103 rejections below.
The applicant further argues that the cited prior art does not teach “executing one or more subsequent portions of the at least one interaction, the one or more subsequent portions of the at least one interaction being modified from a predetermined state in accordance with at least a portion of the information related to the one or more conditions incorporated in the at least one data packet”. The applicant further cited specification P8: 25-29 and P12: 14-18 to support the amendment. The examiner respectfully disagrees. The cited portions at most disclose a process to detect a risk condition of an interaction, flag/encapsulate risk data, and proceed accordingly. The specification fails to provide a description of different portions of the interaction. Hernandez discloses detecting risk, calculating and determining the likelihood of fraud, and performing various remedial actions accordingly, which reads on the above limitation. See the 103 rejections below.
In addition, even if initial and subsequent portions of the interaction are considered, “receiving data and detecting risk” would read on the initial portion of the interaction, and performing remedial actions would read on the modified subsequent portion of the interaction.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Independent claims 1, 11 and 16 are amended to recite “one or more initial portions of interaction” and “one or more subsequent portions of interaction”. The applicant cited specification P8: 25-29 and P12: 14-18 to support the amendment. However, the cited portions at most disclose a process to detect a risk condition of an interaction, flag/encapsulate risk data, and proceed accordingly. The specification fails to provide a description of different portions of the interaction. For the purpose of examination, the examiner interprets the limitations as obtaining data related to user interactions, determining risk, and executing any action against risks (a modified interaction).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
In the instant case, claims 1-10 are directed to a method, claims 11-15 are directed to a non-transitory computer-readable storage medium, and claims 16-20 are directed to an apparatus comprising a memory and a processor. Therefore, these claims fall within the four statutory categories of invention.
The limitations of independent claim 1, which is representative of independent claims 11 and 16, have been denoted with letters A-F by the Examiner for easy reference. The judicial exceptions recited in claim 1 are identified below:
A. obtaining data related to at least one user associated with one or more initial portions of at least one interaction;
B. detecting anomalous information pertaining to the at least one user in relation to one or more designated security risk-related parameters by processing at least a portion of the obtained data using one or more artificial intelligence techniques comprising at least one clustering algorithm in conjunction with one or more additional unsupervised learning techniques;
C. determining, based at least in part on at least a portion of the anomalous information, one or more security risks associated with the at least one user within a context of the at least one interaction; and
D. performing one or more automated actions based at least in part on the one or more determined security risks, wherein performing one or more automated actions comprises:
E. encapsulating at least one data packet incorporating information related to the one or more determined security risks, one or more conditions, the at least one user, and the at least one interaction; and
F. executing one or more subsequent portions of the at least one interaction, the one or more subsequent portions of the at least one interaction being modified from a predetermined state in accordance with at least a portion of the information related to the one or more conditions incorporated in the at least one data packet;
wherein the method is performed by at least one processing device comprising a processor coupled to a memory.
Limitations A through C, under the broadest reasonable interpretation, cover steps or functions that can be reasonably performed in the human mind. Other than reciting artificial intelligence techniques in limitation B, nothing in the claim elements differentiates the limitations from processes that a person can reasonably perform in the mind. The recitation of artificial intelligence techniques generally links the use of the judicial exception to a particular technological environment (MPEP § 2106.05(h)). Therefore, limitations A through C recite an abstract idea, as highlighted above, that is consistent with the observation, evaluation, and judgment aspects of a mental process.
Furthermore, limitations D-F recite “performing action based on determined risks, encapsulating data, and executing interaction in accordance with data packet”, which is following instructions and fits squarely within the “certain methods of organizing human activity” grouping of abstract ideas.
Accordingly, claim 1, and by analogy similar claims 11 and 16, recite at least two abstract ideas, and the analysis proceeds to Step 2A.2.
The judicial exception is not integrated into a practical application. In particular, claim 1 recites the following additional elements:
A. obtaining data related to at least one user associated with one or more initial portions of at least one interaction;
B. detecting anomalous information pertaining to the at least one user by processing at least a portion of the obtained data using one or more artificial intelligence techniques;
C. determining, based at least in part on at least a portion of the identified information, one or more security risks associated with the at least one user within a context of the at least one interaction; and
D. performing one or more automated actions based at least in part on the one or more determined security risks, wherein performing one or more automated actions comprises:
E. encapsulating at least one data packet incorporating information related to the one or more determined security risks, one or more conditions, the at least one user, and the at least one interaction; and
F. executing one or more subsequent portions of the at least one interaction, the one or more subsequent portions of the at least one interaction being modified from a predetermined state in accordance with at least a portion of the information related to the one or more conditions incorporated in the at least one data packet;
wherein the method is performed by at least one processing device comprising a processor coupled to a memory.
The additional elements in limitation B generally link the use of the judicial exception to a particular technological environment (MPEP § 2106.05(h)). The elements in limitation E merely serve as a tool to perform the abstract idea (MPEP § 2106.05(f)). As such, when the additional elements are considered individually and as an ordered combination, the claim as a whole amounts to no more than mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they are not indicative of integration into a practical application. Rather, the claim as a whole generally links the judicial exception to a technological environment defined by high-level recitations of a computer and the Internet. Therefore, the claim is directed to an abstract idea and the analysis proceeds to Step 2B.
The additional elements, both individually and as an ordered combination, do not amount to significantly more than the judicial exception because the outcome of the considerations at Step 2B is the same when the considerations from Step 2A.2 are reevaluated. As discussed under Step 2A.2, the additional elements amount to no more than generally linking the abstract idea to a generic computer environment. This is not enough to provide an inventive concept. Therefore, claims 1, 11, and 16 are not patent eligible.
Dependent claims 2, 12 and 17 further recite annotating data to indicate that the user is associated with security risks. The limitation further recites the abstract idea of following rules. The claims do not recite additional elements that integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Dependent claims 3, 13 and 18 further recite processing data using a natural language processing technique. The limitation of processing data further recites the abstract idea of a mental process. The additional element of a natural language processing technique generally links the use of the judicial exception to a particular technological environment (MPEP § 2106.05(h)).
Dependent claims 4, 14 and 19 further recite identifying data. The limitation further recites the abstract idea of a mental process. The claims do not recite additional elements that integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Dependent claim 5 further recites determining one or more risk levels. The limitation further recites the abstract idea of a mental process. The claim does not recite additional elements that integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Dependent claims 6, 15 and 20 further recite identifying one or more features of interactions. The limitation further recites the abstract idea of a mental process. The claims do not recite additional elements that integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Dependent claims 7, 13 and 18 further recite determining that one or more interactions involve a transaction value over a given amount and are carried out in accordance with one or more geographic conditions. The limitation further recites the abstract idea of a mental process. The claims do not recite additional elements that integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Dependent claim 8 further recites generating and outputting a notification. The limitation further recites the abstract idea of following instructions. The claim does not recite additional elements that integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Dependent claim 9 further recites training one or more artificial intelligence techniques. The limitation generally links the use of the judicial exception to a particular technological environment (MPEP § 2106.05(h)).
Dependent claim 10 further recites interfacing with one or more systems. The limitation of interfacing further recites the abstract idea of following instructions. The additional element of one or more systems merely serves as a tool to perform the abstract idea (MPEP § 2106.05(f)).
In summary, the dependent claims considered both individually and as an ordered combination do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. The claims do not recite an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or provide meaningful limitations beyond generally linking an abstract idea to a particular technological environment. Therefore, the claims are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hernandez (US 20210073819 A1) in view of Filliben (US 20190377819 A1), and further in view of Jarosch et al. (US 20250156858 A1).
With respect to claims 1, 11 and 16:
Hernandez teaches:
obtaining data related to at least one user associated with one or more initial portions of at least one interaction. (At step 303, the process 300 includes receiving one or more requests. The request can include an identifier and credentials, such as a username, password, or public key. Based on the identifier, the system can identify a particular user account and the system can retrieve account data 207 used to authenticate the credentials. The request can include selections and other data for configuring various triggers, thresholds, and other aspects of fraud monitoring and detection. [0094])
detecting anomalous information pertaining to the at least one user in relation to one or more designated security risk related parameters by processing at least a portion of the obtained data using one or more artificial intelligence techniques. (The monitoring system 200 can provide the monitoring data and the first, second, and third determinations to a trained machine learning model for predicting fraud likelihood. In some embodiments, data can automatically be retrieved based on the request. For example, in response to a request to configure fraud detection services for a plurality of user accounts of a bill pay system and an account opening system, the monitoring system 200 can automatically retrieve historical data associated with each of the plurality of users. At step 306, the process 300 includes configuring parameters, such as, for example, triggers and threshold for controlling fraud analysis and prediction processes. [0065 0095-0096])
comprising at least one […] algorithm in conjunction with one or more additional unsupervised learning techniques. (The machine learning model can be configured to apply one or more learning techniques including, but not limited to, supervised learning, unsupervised learning. [0115])
determining, based at least in part on at least a portion of the anomalous information, one or more security risks associated with the at least one user within a context of the at least one interaction. (By the process 400, various analytical outputs can be generated including, but not limited to, fraud likelihood scores, determinations of anomalous activity, and identifications of particular fraud behaviors. [0100-0102])
performing one or more automated actions based at least in part on the one or more determined security risks, wherein performing one or more automated actions comprises […]. (By the process 400, various analytical outputs can be generated including, but not limited to, fraud likelihood scores, determinations of anomalous activity, and identifications of particular fraud behaviors. [0100-0102])
[…] information related to the one or more determined security risks, one or more conditions, the at least one user, and the at least one interaction. (In this example, the system determines that deposits and transactions from the customer account typically occur via an e-banking system. The system can determine that the transaction amount exceeds historical deposits the other accounts with which the particular account is associated. Based on the various determinations of atypical activity, the system can compute a likelihood of fraud. The system can compare the likelihood of fraud to one or more predetermined thresholds. [0008])
executing one or more subsequent portions of the at least one interaction, the one or more subsequent portions of the at least one interaction being modified from a predetermined state in accordance with at least a portion of the information related to the one or more conditions incorporated in the at least one data packet. (In response to the likelihood of fraud meeting the predetermined threshold, the system can perform actions including, but not limited to, generating and transmitting an alert, identifying a particular teller that processed the transaction via the teller system, suspending and/or halting transactional services to the particular account, and transmitting a notification to a computing device with which the administrator account is associated. [Abstract 0008])
wherein the method is performed by at least one processing device comprising a processor coupled to a memory. (An exemplary system for implementing various aspects of the described operations, which is not illustrated, includes a computing device including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit. [0137])
Hernandez does not explicitly teach a clustering algorithm. However,
Filliben teaches clustering algorithm. (The ensemble may determine an output using one or more machine learning models—e.g., decision trees, support vector machines, neural network, Boltzmann machine, restricted Boltzmann machine, autoencoder, clustering algorithms (knn, shared nearest neighbors, DBSCAN, K means, and others). [0072])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system as disclosed by Hernandez to use a clustering algorithm with the technique as disclosed by Filliben, in order to process large datasets efficiently, as Filliben suggests [0003].
Hernandez in view of Filliben does not teach the following limitation. However,
Jarosch teaches:
encapsulating at least one data packet incorporating information related to the [one or more determined security risks, one or more conditions, the at least one user, and the at least one interaction]. (the primary transaction processor may wish to encapsulate the transaction instrument (for example, the credit card number or other information) by generating a secondary token, and associating the transaction instrument with the secondary token such that there is a one-to-one relationship between the transaction instrument and secondary token. [0027 0064])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system as disclosed by Hernandez in view of Filliben to encapsulate data for the system to process, using the technique as disclosed by Jarosch, in order to reduce risk during data communication, as Jarosch suggests [0064].
Claim 11, a CRM with the same scope as claim 1, is rejected.
Claim 16, an apparatus with the same scope as claim 1, is rejected.
With respect to claims 2, 12 and 17:
Hernandez further teaches wherein performing one or more automated actions comprises automatically annotating data associated with the one or more subsequent portions of the at least one interaction to indicate that the at least one user is associated with at least one of the one or more determined security risks. (For example, the application database service 215 can flag databases, users, and/or application activities that are determined to be outside of expected usage, potentially fraudulent, and/or in violation of one or more policies. [0075])
Claim 12, a CRM with the same scope as claim 2, is rejected.
Claim 17, an apparatus with the same scope as claim 2, is rejected.
With respect to claims 3, 13 and 18:
Hernandez further teaches wherein identifying information pertaining to the at least one user comprises processing, using one or more natural language processing techniques, data related to one or more of user name information, user address information, user billing information, and user contact information. (The request can include an identifier and credentials, such as a username, password, or public key. Based on the identifier, the system can identify a particular user account and the system can retrieve account data 207 used to authenticate the credentials. The request can include metadata, such as, for example, an IP address, MAC address, and location data. The process 300 can include performing one or more data analysis processes 400 using the received data. [0094-0100])
Claim 13, a CRM with the same scope as claim 3, is rejected.
Claim 18, an apparatus with the same scope as claim 3, is rejected.
With respect to claims 4, 14 and 19:
Hernandez further teaches wherein identifying information pertaining to the at least one user comprises identifying at least one of one or more geography- related parameters associated with the at least one user, one or more enterprise ownership parameters associated with the at least one user, and one or more alphanumeric identifiers associated with the at least one user. (In one example, monitoring data 209 comprises transactional and location data (e.g., comprising one or more geographic positions) associated with a particular user account. Transactional data can include, for example, user identifiers, banking information, such as transaction amounts, timestamps, credentials, and networking information, such as IP addresses and configuration data. The transactional data can include information associated with a computing device 206 with which transactional activity is associated, such as, for example, MAC address, phone number, phone provider, device type, and other data. The request can include an identifier and credentials, such as a username, password, or public key. The request can include metadata, such as, for example, an IP address, MAC address, and location data. [0070 0094])
Claim 14, a CRM with the same scope as claim 4, is rejected.
Claim 19, an apparatus with the same scope as claim 4, is rejected.
With respect to claim 5:
Hernandez further teaches wherein determining one or more security risks associated with the at least one user within a context of the at least one interaction comprises determining one or more risk levels associated with each of the one or more geography- related parameters associated with the at least one user, the one or more enterprise ownership parameters associated with the at least one user, and the one or more alphanumeric identifiers associated with the at least one user. (The system can compare the likelihood of fraud to one or more predetermined thresholds. In response to the likelihood of fraud meeting the predetermined threshold. In one example, monitoring data 209 comprises transactional and location data (e.g., comprising one or more geographic positions) associated with a particular user account. Transactional data can include, for example, user identifiers, banking information, such as transaction amounts, timestamps, credentials, and networking information, such as IP addresses and configuration data. The transactional data can include information associated with a computing device 206 with which transactional activity is associated, such as, for example, MAC address, phone number, phone provider, device type, and other data. The request can include an identifier and credentials, such as a username, password, or public key. The request can include metadata, such as, for example, an IP address, MAC address, and location data. [0008 0070 0094])
With respect to claims 6, 15 and 20:
Hernandez further teaches wherein determining one or more security risks associated with the at least one user within a context of the at least one interaction comprises identifying one or more features of the at least one interaction associated with security risk assessment. (The configuration data 211 can include triggers and thresholds for assessing outputs of various monitoring process. In this example, the trigger can include an expected location (e.g., based on a historical pattern of login activities), time, and IP address with which logins for a particular user account are associated. [0072])
Claim 15, a CRM with the same scope as claim 6, is rejected.
Claim 20, an apparatus with the same scope as claim 6, is rejected.
With respect to claim 7:
Hernandez further teaches wherein identifying one or more features of the at least one interaction associated with security risk assessment comprises at least one of determining that the at least one interaction involves a transaction valued over a given amount, and that the at least one interaction is to be carried out in accordance with one or more predetermined geographic conditions. (As an example, the pattern tool 227 can determine an average and median value or frequency of ATM withdrawals by a particular user, in a particular region, or in general. If the particular user has purchases outside of a geofence around the home address of the particular user for more than fifteen days, the system can trigger a remedial action. As an example, the pattern tool 227 can determine that a 98% confidence window of an amount of a teller-based withdrawal for a particular user is between $0 and $500, such that an attempted withdrawal of over $500 can trigger a remedial action or contribute to an overall decision to trigger a remedial action when combined with similar potential fraud indicators from other analysis. [0086])
With respect to claim 8:
Hernandez further teaches wherein performing one or more automated actions comprises automatically generating and outputting, to one or more additional users associated with the one or more subsequent portions of the at least one interaction, at least one notification that the at least one user is associated with at least one of the one or more determined security risks. (As one example, the alert service 221 may generate and transmit alerts to a particular user of an external system 203 based on a determination of potentially fraudulent activity. [0080])
With respect to claim 9:
Hernandez further teaches wherein performing one or more automated actions comprises automatically training at least a portion of the one or more artificial intelligence techniques using feedback related to at least one of the one or more determined security risks and at least a portion of the anomalous information. (Training the machine learning model can include generating a plurality of parameters and weight values, each weight value for determining a contribution level that a corresponding parameter provides to an output of the machine learning model. Non-limiting examples of parameters include, but are not limited to, metrics of pattern similarity between monitoring data 209 and historical data, a count of instances in which an impossible activity occurred, a count of login failure instances, a count of credential change requests, a mapping of network addresses from which various requests and/or inputs were received, employment data-based metrics (e.g., such as operating hours, locations, actions, and behaviors), and user data-based metrics (e.g., such as hours of access, typical transfer amounts, and other activity records or patterns). [0115])
With respect to claim 10:
Hernandez further teaches wherein obtaining data related to the at least one user associated with the one or more initial portions of the at least one interaction comprises interfacing with one or more systems utilized in connection with executing at least a portion of the one or more initial portions of the at least one interaction. (At step 309, the process 300 includes receiving data from one or more external systems 203. The data can be received substantially continuously and can be stored as monitoring data 209. In one example, the data includes transactional data, such as a time-series record of transaction amounts, locations, and methods by which transactions were requested. [0098])
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20230132635 A1: Requests to perform activity with respect to a customer account can be monitored to attempt to detect fraudulent activity due to compromised customer credentials or other unauthorized access. The unauthorized party can request actions such as to create a new account, mount a snapshot of customer data, and exfiltrate the customer data. Various embodiments monitor such requests and permissions granted to accounts not directly owned by a customer, and can apply automatic mitigations for suspicious activity in order to reduce the risk of exposing data to unauthorized accounts. Such an offering determines mitigations to perform, such as to block, alert, rate limit, or terminate the linked or non-linked account based on account reputation. The detection mechanism can use various heuristics to make mitigation decisions, as may consider factors such as account age, geolocation, access history, device fingerprint, network domain, payment type, prior suspicious activity, and the like.
US 11288672 B2: A machine learning engine for fraud detection following link selection may be trained using artificial intelligence techniques and used according to techniques discussed herein. A buyer account may be used to establish and generate a digital gift card having a particular value specified by the buyer. The digital gift card may then be conveyed to another account, such as an email address. The digital gift card may be provided with an online electronic process for redemption and use of the value, for example, by selecting a link and navigating to the process. When the claimer account attempts to utilize the value of the gift card by navigating to the process or otherwise engaging in the electronic process through a device, a risk and fraud analysis engine may execute to determine, based on real-time data of the claimer account, the buyer account, and/or device, whether the digital gift card was generated fraudulently or is being used fraudulently.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZESHENG XIAO whose telephone number is (571)272-6627. The examiner can normally be reached 10:00am-4:30pm M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patrick McAtee can be reached on (571) 272-7575. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Z.X./Examiner, Art Unit 3698
/PATRICK MCATEE/Supervisory Patent Examiner, Art Unit 3698