Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in reply to the request for continued examination filed on 01/22/2026.
Claims 1, 2, 10-12, 15, 17, and 18 have been amended.
Claims 4, 6, 7, 9 and 16 have been canceled.
Claims 21-25 have been added.
Claims 1-3, 5, 8-15, and 17-25 have been examined and are pending.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/22/2026 has been entered.
Remarks
With regard to the 101 rejection, the arguments have been considered but they are not persuasive. The Applicant argued on page 14 that “the claims mirror those of eligible claims 2 and 3 in Example 35 . . . recite a specific, non-conventional code matching mechanism that enables verification of a bank customer’s identity . . . ”. The claims in Example 35 performed both the decrypting and comparing functions between generated codes, thus verifying the transactions. The claims in the instant case, by contrast, are directed to generating a set of vectors that includes historical transactions and classifying them according to potential fraud characteristics. The claims further classify the transactions based on probability and classify those probabilities as risk-prone transactions. Hence, the amended limitation is directed to the abstract idea of detecting fraudulent transactions rather than showing a technical improvement.
In Step 2A – Prong Two, the use of a machine learning model to generate a probability score is not indicative of integration into a practical application. The limitations merely add the words “apply it” (or an equivalent) to the judicial exception, or merely use a computer as a tool to perform an abstract idea; see MPEP 2106.05(f).
Similarly, in Step 2B, the Applicant asserted that the elements amount to significantly more than an abstract idea, citing the combination of the amended limitations using machine learning to detect a probability of fraudulent transactions. However, the limitations merely add the words “apply it” (or an equivalent) to the judicial exception, or merely use a computer as a tool to perform an abstract idea; see MPEP 2106.05(f). Hence, the claim does not amount to “significantly more,” nor are the limitations indicative of an inventive concept.
Given the above reasoning, the 101 rejection is maintained.
With regard to the 103 rejection, the Applicant has amended the claims. For instance, the Applicant amended claim 1 to recite “either (1) blocking, by the server computer system, the first computing system and the second computing system from performing the event over the computer network . . .”. Such a feature is still disclosed by the previously cited reference Manapat. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the feature of determining a potentially fraudulent transaction based on a threshold generated by a machine learning model, as taught by Manapat, with the invention of detecting transactions in real time, as disclosed by Unser, to better provide a security measure for real-time transactions (Abstract). Therefore, the combination is obvious.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 5, 8-15, and 17-25 are directed to a system, method, or product which are one of the statutory categories of invention. (Step 1: YES).
Claims 1-3, 5, 8-15, and 17-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1-3, 5, 8-15, and 17-25 are directed to an abstract idea, a Certain Method of Organizing Human Activity. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional computer elements, which are recited at a high level of generality, provide generic computer functions that do not add meaningful limits to practicing the abstract idea.
Claims 1, 11 and 17 are grouped together. Claim 1, for instance, recites, in part: A method for a server computer system to analyze an event, the method comprising: storing, by the server computer system during a first period of time, one or more batch features in a database, the one or more batch features being based on an aggregation of historical event data; after the first period of time, detecting, at the server computer system, a request to perform the event over a computer network between a first computing system and a second computing system operated by a user; after detecting the request, determining, in real time, by the server computer system, one or more real time features corresponding to the event; retrieving from the database, by the server computer system, the one or more batch features that correspond to one or more attributes associated with the event; generating a third feature set representing a difference between a historical behavior of the user and a behavior of the user in real time based on the one or more real time features and the one or more batch features of the event; invoking, by the server computer system, a machine learning model to generate a probability score of the event based on inputting, to the machine learning model, the one or more real time features, the one or more batch features, and the third feature set; determining, by the server computer system, whether the probability score of the event is higher than a first threshold or higher than a second threshold and lower than the first threshold; and either (1) blocking, by the server computer system, the first computing system and the second computing system from performing the event over the computer network when the probability score of the event is higher than the first threshold; or (2) transmitting, to the second computing device, a notification that causes the second computing device to request additional authentication from the user, when the probability score of the event is higher than the second threshold and lower than the first threshold. The claims are directed to business relations (commercial interactions). Hence, they fall within the Certain Methods of Organizing Human Activity grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim only recites additional elements such as a non-transitory computer readable storage medium, a memory, a server computer system, one or more processors, and a machine learning model, recited at a high level of generality (detecting, determining, performing), such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Next, the claim as a whole is analyzed to determine whether any element, or combination of elements, is sufficient to ensure the claim amounts to significantly more than an abstract idea. Claims 1, 11 and 17 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements merely perform the abstract idea on a generic device, i.e., the abstract idea and “apply it.” There is no improvement to computer technology or computer functionality (MPEP 2106.05(a)), no particular machine (MPEP 2106.05(b)), and no particular transformation (MPEP 2106.05(c)). Given the above reasons, claims 1, 11 and 17 do not recite an inventive concept. Thus, the claims are not patent eligible.
The dependent claims have been given the full two-part analysis (the Step 2A two-prong test and Step 2B), including analyzing the additional limitations both individually and in combination. The dependent claims, when analyzed both individually and in combination, are also held to be patent ineligible under 35 U.S.C. 101 for the same reasons as above, and the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The additional limitations of the dependent claims, when considered individually and as an ordered combination, do not amount to significantly more than the abstract idea.
The dependent claims 2, 12 and 18 have been given the full two-part analysis (the Step 2A two-prong test and Step 2B), including analyzing the additional limitations both individually and in combination. These dependent claims, when analyzed both individually and in combination, are also held to be patent ineligible under 35 U.S.C. 101 because the claims recite an abstract idea of grouping batch features and attributes, and the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The additional elements (non-transitory computer readable media, machine learning model) of these dependent claims, when considered individually and as an ordered combination, do not amount to significantly more than the abstract idea because they perform the grouping of batch features and attributes, and performing an abstract idea on a generic device is not an improvement to technology. See MPEP 2106.05(f).
The dependent claims 3, 13 and 19 have been given the full two-part analysis (the Step 2A two-prong test and Step 2B), including analyzing the additional limitations both individually and in combination. These dependent claims, when analyzed both individually and in combination, are also held to be patent ineligible under 35 U.S.C. 101 because the claims recite an abstract idea of grouping batch features and attributes, and the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The additional elements (non-transitory computer readable media, machine learning model) of these dependent claims, when considered individually and as an ordered combination, do not amount to significantly more than the abstract idea because they perform the grouping of batch features and attributes, and performing an abstract idea on a generic device is not an improvement to technology. See MPEP 2106.05(f).
The dependent claims 5, 14 and 20 have been given the full two-part analysis (the Step 2A two-prong test and Step 2B), including analyzing the additional limitations both individually and in combination. These dependent claims, when analyzed both individually and in combination, are also held to be patent ineligible under 35 U.S.C. 101 for the same reasons as above, and the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The additional elements (computer readable storage media, server computer system, one or more processors) of these dependent claims, when considered individually and as an ordered combination, do not amount to significantly more than the abstract idea because they generate one or more combined features of an event in real time, and performing an abstract idea on a generic device is not an improvement to technology. See MPEP 2106.05(f).
The dependent claims 8, 21 and 22 have been given the full two-part analysis (the Step 2A two-prong test and Step 2B), including analyzing the additional limitations both individually and in combination. These dependent claims, when analyzed both individually and in combination, are also held to be patent ineligible under 35 U.S.C. 101 because the claims recite an abstract idea of determining a probability score based on inputting into a machine learning model, and the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The additional elements (server computer system, computing system, computer readable storage media) of these dependent claims, when considered individually and as an ordered combination, do not amount to significantly more than the abstract idea because they determine a probability score based on inputting into a model, and performing an abstract idea on a generic device is not an improvement to technology. See MPEP 2106.05(a).
The dependent claims 10, 15 and 23 have been given the full two-part analysis (the Step 2A two-prong test and Step 2B), including analyzing the additional limitations both individually and in combination. These dependent claims, when analyzed both individually and in combination, are also held to be patent ineligible under 35 U.S.C. 101 because the claims recite an abstract idea of requiring additional authentication if a certain threshold is not met, and the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The additional elements (computer storage media, one or more processors, server computer system) of these dependent claims, when considered individually and as an ordered combination, do not amount to significantly more than the abstract idea because they require additional authentication if a certain threshold is not met, and performing an abstract idea on a generic device is not an improvement to technology. See MPEP 2106.05(a).
The dependent claims 24 and 25 have been given the full two-part analysis (the Step 2A two-prong test and Step 2B), including analyzing the additional limitations both individually and in combination. These dependent claims, when analyzed both individually and in combination, are also held to be patent ineligible under 35 U.S.C. 101 because the claims recite an abstract idea in which the third feature set is a merchant-specific feature that facilitates fraud determination, and the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The additional elements (computer storage media, one or more processors, server computer system, non-transitory computer readable storage media) of these dependent claims, when considered individually and as an ordered combination, do not amount to significantly more than the abstract idea because they facilitate the fraud determination with a merchant-specific feature, and performing an abstract idea on a generic device is not an improvement to technology. See MPEP 2106.05(a).
Therefore, claims 1-3, 5, 8-15, and 17-25 are not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5, 8, 10-15, 17-25 are rejected under 35 U.S.C. 103 as being unpatentable over Unser (US 2015/0332414 A1) in view of Manapat et al. (US 10,867,303 B1).
Claims 1, 11 and 17 are grouped together. Claim 1 is disclosed; for instance, Unser teaches: a method for a server computer system to analyze an event, the method comprising: storing, by the server computer system during a first period of time, one or more batch features in a database (Unser, see at least par. [0043] “. . . these transactions, embodied as a plurality of transaction records, include certain information which may shed light on the class of goods and services in a particular transaction based on analysis and comparison to similar transactions stored in the transaction data 310”), the one or more batch features being based on an aggregation of historical event data (Unser, see at least par. [0046] “. . . For example, based on historical data associated with transactions corresponding to the automobile industry, even dollar amounts (e.g. $500, $1,000, $5,000, etc.) generally describe vehicle purchases, in contrast to other possible purchases associated with that category of merchant (e.g. repairs or vehicle component/accessory purchases which tend not to be rounded). Such characteristics or traits associated with the historical data for a given merchant category may be stored in a rules data base within the analytics engine . . .”) Interpretation: batch features correspond to, for example, the category of merchant associated with a transaction;
after the first period of time, detecting, at the server computer system, a request to perform the event over the computer network between a first computing system and a second computing system operated by a user (Unser, see at least par. [0022] “. . . As used herein, the term “transaction acquirer” can include, for example, a merchant, a merchant terminal, an automated teller machine (ATM), or any other suitable institution or device configured to initiate a financial transaction per the request of the customer or cardholder . . . computer system 110 via a network 130 which connects the computer system 120 of the transaction acquirer or merchant 122 with the managing computer system 110 of the payment card service provider 112.”) Interpretation: the second computing system corresponds to the computer system associated with the merchant, and the first computing system corresponds to a computing device of the computer system 110.
after detecting the request, determining, in real time, by the server computer system, one or more real time features corresponding to the event (Unser, par. [0021] “For a given transaction record that omits direct itemization, analysis of the record is performed using the aforementioned factors and applied back into that particular payment card transaction data, in order to determine or forecast in real time what type or category of product was purchased in that particular transaction.” & see at least par. [0056] “The terminal ID field 7024 is also analyzed (block 820) and the transaction purchase price 7022 is compared with an average terminal purchase price associated with the particular terminal ID for the given merchant 7023, based on historical transactions data . . . Comparison yields data indicating whether the particular transaction falls within or outside the range of one or more average transaction amounts associated with the particular merchant terminal . . .”) Interpretation: determining data in real time corresponds to amount data being compared to historical transactions;
retrieving from the database, by the server computer system, one or more batch features that correspond to one or more attributes associated with the event (Unser, see at least par. [0038] “Further statistical and variable analysis processing 370 is utilized in order to ascribe attributes to purchasers of a given transaction. Variables such as time, purchase frequency, purchasing geography and location, aggregate customer spending, and the like may be used to develop profiles for particular transaction events . . .”) Interpretation: attributes could correspond to purchase frequency and aggregate customer spending.
Unser does not disclose the following; however, Manapat teaches:
generating a third feature set representing a difference between a historical behavior of the user and a behavior of the user in real time based on the one or more real time features and the one or more batch features of the event (Manapat, par. [0021]: the real time data and historical factors are combined to generate a data set; & Unser, see at least par. [0046] “. . . Statistical analysis of the data by the analysis engine of the managing computer system enables determination of sets of price thresholds corresponding to clusters of transaction purchase prices and allocated to a corresponding category or type of item for offered for sale by the merchant . . .”) Interpretation: the third feature set corresponds to the sets of price thresholds;
invoking, by the server computer system, a machine learning model to generate a probability score of the event based on inputting, to the machine learning model, the one or more real time features, the one or more batch features, and the third feature set (Manapat, Col. 3 ln 12-16 “receiving a plurality of attempted purchase transactions for the user at the system; analyzing each of the plurality of attempted purchase transactions to generate a score indicative of the attempted transaction's risk level . . .” & ln 48-55 “The fraud risk assessment and actioning system uses machine learning to assess the risk of each attempted transaction and automatically blocks those transactions predicted to have an excessive risk of fraud, by comparing a generated fraud likelihood score (also referred to herein as a ‘fraud score’)—a numerical estimate of the probability that an attempted transaction is fraudulent—for the transaction to a permissible threshold”) Interpretation: the system generates a risk score that is a probability that an attempted transaction is fraudulent;
determining, by the server computer system, whether the probability score of the event is higher than a first threshold or higher than a second threshold and lower than the first threshold (Manapat, Col. 4 “. . . In embodiments, the fraud likelihood score may be on a set scale (e.g., 0.0 to 1.0), where higher scores indicate a prediction that the attempted charge has a higher likelihood of being fraudulent. In other embodiments, the scale may instead be inverted, such that lower scores indicate a prediction that the attempted charge has a higher likelihood of being fraudulent. The scale may be divided into different tiers (e.g., 3 tiers) that correspond to, e.g., low, elevated, and high risk. For example, in an embodiment where higher scores indicate a prediction that the attempted charge has a higher likelihood of being fraudulent, scores between 0.00 and 0.33 (inclusive) may be considered to be low risk, scores between 0.34 and 0.67 (inclusive) may be considered to be elevated risk, and scores at or above 0.68 may be considered to be high risk . . .”) Interpretation: the thresholds are divided by risk scores; a score could fall in the second tier, which is higher than the first tier but lower than the next tier;
and either (1) blocking, by the server computer system, the first computing system and the second computing system from performing the event over the computer network when the probability score of the event is higher than the first threshold (Manapat, Col. 3 ln 29-30 “Charges estimated to be a higher risk than a threshold are blocked” & Col. 4 ln 24-32 “The scale may be divided into different tiers (e.g., 3 tiers) that correspond to, e.g., low, elevated, and high risk. For example, in an embodiment where higher scores indicate a prediction that the attempted charge has a higher likelihood of being fraudulent, scores between 0.00 and 0.33 (inclusive) may be considered to be low risk, scores between 0.34 and 0.67 (inclusive) may be considered to be elevated risk, and scores at or above 0.68 may be considered to be high risk.”) Interpretation: a charge, which corresponds to an event, is blocked if its risk score, a probability score, is higher than a threshold; or
(2) transmitting, to the second computing device, a notification that causes the second computing device to request additional authentication from the user, when the probability score of the event is higher than the second threshold and lower than the first threshold (Manapat, Col. 4 ln 24-32 “The scale may be divided into different tiers (e.g., 3 tiers) that correspond to, e.g., low, elevated, and high risk. For example, in an embodiment where higher scores indicate a prediction that the attempted charge has a higher likelihood of being fraudulent, scores between 0.00 and 0.33 (inclusive) may be considered to be low risk, scores between 0.34 and 0.67 (inclusive) may be considered to be elevated risk, and scores at or above 0.68 may be considered to be high risk.” & Col. 33 “. . . receiving a request for the risk explanation for one of the plurality of purchase transactions processed by the system due to the purchase transaction matching an active user rule; in which the processed purchase transaction has a fraud likelihood score generated via the risk model which exceeds a risk threshold configurable at the system; and in which the default behavior to reject the purchase transaction is overridden by the active user rule specifying the matching purchase transaction is to be processed . . .”) Interpretation: requesting a risk explanation corresponds to requesting additional authentication from the user.
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the feature of determining a potentially fraudulent transaction based on a threshold generated by a machine learning model, as taught by Manapat, with the invention of detecting transactions in real time, as disclosed by Unser, to better provide a security measure for real-time transactions (Abstract). Therefore, the combination is obvious.
Claims 2, 12 and 18 are grouped together. Claim 2 is disclosed, for instance: Unser in view of Manapat teaches the method of claim 1. Unser further teaches: wherein accessing the one or more batch features comprises: searching, by the server computer system, a list of batch features in an aggregation of historical event data based on one or more search keys of the event, and selecting, by the server computer system, the one or more batch features corresponding to the one or more attributes associated with the event from the list of batch features (Unser, see at least par. [0035] “. . . Filtering of the data may be based on one or more of transaction purchase price (amount), merchant identifier, and terminal identifier associated with the particular merchant terminal at which the transaction was made. Filtering of the data based on time sequencing of transaction events and temporal intervals (e.g. last five years' data, seasonal date ranges, etc.), may be applied to further target particular aspects of the transaction data for given applications . . .”) Interpretation: filtering data based on certain features such as purchase price, merchant identifier, and time sequencing of events corresponds, under BRI, to searching a list of batch features of historical events based on search keys (such as a merchant identifier and/or search ranges).
Claims 5, 14 and 20 are grouped together. Unser in view of Manapat teaches: The method of claim 1. Unser, furthermore, teaches: wherein a batch feature of the one or more batch features includes at least one of a mean value or a standard deviation of an attribute per session of multiple sessions of the event (Unser, par. [0006] “. . . the determined average may be calculated as the arithmetic average (mean). In other embodiments, the average may be calculated as the median, mode, geometric mean and/or weighted average . . .” & par. [0021] “For a given transaction record that omits direct itemization, analysis of the record is performed using the aforementioned factors and applied back into that particular payment card transaction data, in order to determine or forecast in real time what type or category of product was purchased in that particular transaction.”).
Claims 3, 13 and 19 are grouped together. Unser in view of Manapat teaches: wherein the one or more search keys are determined based on the one or more attributes associated with the event (Unser, see at least par. [0035] “. . . Filtering of the data may be based on one or more of transaction purchase price (amount), merchant identifier, and terminal identifier associated with the particular merchant terminal at which the transaction was made. Filtering of the data based on time sequencing of transaction events and temporal intervals (e.g. last five years' data, seasonal date ranges, etc.), may be applied to further target particular aspects of the transaction data for given applications . . .”) Interpretation: filtering data based on certain features such as purchase price, merchant identifier, and time sequencing of events corresponds, under BRI, to searching a list of batch features of historical events based on search keys (such as a merchant identifier and/or search ranges).
Claims 8, 21, and 22 are grouped together. Unser in view of Manapat teaches the method of claim 1. Unser further teaches: wherein: the one or more batch features are generated and stored in an offline operation, and the real time features are associated with the computing system (Unser, see at least par. [0020] “. . . the predictive model may be enhanced with additional external data (e.g. merchant transactions information relating to specific purchases, and/or other external data relating to purchase transactions contained in the transactions database) . . .” Interpretation: external data stored in the transactions database corresponds to an offline operation; & par. [0021] “For a given transaction record that omits direct itemization, analysis of the record is performed using the aforementioned factors and applied back into that particular payment card transaction data, in order to determine or forecast in real time what type or category of product was purchased in that particular transaction.”).
Claims 10, 15 and 23 are grouped together. Unser in view of Manapat teaches claim 10, for instance: the method of claim 1. Manapat further teaches: further comprising: creating, by the server computer system, feedback data based on the blocking of the performance of the event, wherein the machine learning model is retrained based at least in part on the feedback data (Manapat, col. 3 ln 55-65). The cited portion discloses a feedback loop for retraining the machine learning model.
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the feature of a feedback loop, as taught by Manapat, with the invention of detecting transactions in real time, as disclosed by Unser in view of Manapat, to better provide a security measure for real-time transactions. Therefore, the combination is obvious.
Claims 24 and 25 are grouped together. For instance, claim 24 is disclosed: Unser in view of Manapat teaches the method of claim 1. Manapat further teaches: wherein the third feature set is a merchant specific feature that facilitates fraud determination (Manapat, Col. 4 ln 35-42). The cited portion discloses that the set is subject to fraud determination steps.
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the feature of determining a potentially fraudulent transaction based on a threshold generated by a machine learning model, as taught by Manapat, with the invention of detecting transactions in real time, as disclosed by Unser in view of Manapat, to better provide a security measure for real-time transactions. Therefore, the combination is obvious.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TOAN DUC BUI whose telephone number is (571) 272-0833. The examiner can normally be reached M-F, 8:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mike W. Anderson, can be reached at (571) 270-0508. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TOAN DUC BUI/Examiner, Art Unit 3693
/ELIZABETH H ROSEN/Primary Examiner, Art Unit 3693