Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This non-final Office action is in response to the application filed on 09/21/2023, which is a continuation (CON) of Application No. 17/156,972, filed 01/25/2021, which is a continuation of Application No. 16/037,418, filed 07/17/2018.
Claims 21-40 are pending for examination. Claims 21, 32, and 40 are independent claims.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 21, 32, and 40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5, and 11 of U.S. Patent No. 10,901,577. Claims 21, 22, 26, 28-30, 32, 33, 37, 39, and 40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-16 of U.S. Patent No. 11,803,290. Although the claims at issue are not identical, they are not patentably distinct from each other because the present claims appear to be a reworded, broader version of the patented claims; see the tables below.
Patent No. 10,901,577
App. No. 18/471966
1. A computer-implemented method comprising: executing, by one or more computing devices, a first application;
receiving, by the one or more computing devices and from the first application executed by the one or more computing devices, data indicating information that has been one or more of presented by or input into the first application;
executing, by the one or more computing devices, a second application that is different and distinct from the first application;
determining, by the one or more computing devices based at least in part on one or more functions of the second application, a subset of information from the information that has been one or more of presented by or input into the first application;
in response to executing the second application and determining the subset of information, automatically generating, by the one or more computing devices, based at least in part on the subset of information, one or more suggested candidate inputs for one or more input fields associated with the second application;
displaying, by the one or more computing devices and concurrently with display of the second application, an interface comprising one or more options to select at least one suggested candidate input of the one or more suggested candidate inputs for entry into at least one of the one or more input fields associated with the second application; and
responsive to receiving data indicating a selection of a particular suggested candidate input of the one or more suggested candidate inputs via the interface,
communicating, by the one or more computing devices and to the second application, data indicating the particular suggested candidate input selected for entry into at least one of the one or more input fields associated with the second application.
5. The computer-implemented method of claim 1, comprising: utilizing, by the one or more computing devices, a machine learning (ML) model to determine, for each suggested candidate input of the one or more suggested candidate inputs, a predicted likelihood that a user will select the suggested candidate input via the interface; and determining, by the one or more computing devices, for each suggested candidate input of the at least one suggested candidate input, and based at least in part on the predicted likelihood that the user will select the suggested candidate input via the interface, to include an option in the interface for selecting the suggested candidate input.
11. The computer-implemented method of claim 5, comprising updating, by the one or more computing devices, the ML model based at least in part on data generated by a plurality of different computing devices via a federated-learning paradigm.
21. (New) A computer-implemented method comprising:
obtaining, by one or more computing devices, user interaction data for a user and a first application, wherein the user interaction data is indicative of information that has been one or more of presented by or input into the first application;
obtaining, by the one or more computing devices, contextualization data indicative of one or more aspects of at least one of the user, the first application, or the user interaction data;
determining, by the one or more computing devices based at least in part on the user interaction data, a plurality of suggested candidate inputs for an input field associated with a second application;
based at least in part on the contextualization data, using, by the one or more computing devices, a machine-learned model to determine, for each of the plurality of suggested candidate inputs, a likelihood that the user will select the suggested candidate input;
receiving, by the one or more computing devices, information indicative of a user input that selects a first suggested candidate input of the plurality of suggested candidate inputs; and
sending, by the one or more computing devices, federated model update information to a computing system, wherein the computing system comprises a model trainer that generates updates for the machine-learned model via federated learning, and wherein the federated model update information is based at least in part on:
the information indicative of the user input that selects the first suggested candidate input, and
the likelihood that the user will select the first suggested candidate input.
Patent No. 11,803,290
App. No. 18/471966
1. A computer-implemented method comprising: obtaining, by one or more computing devices, user interaction data for a user and a first application, wherein the user interaction data is indicative of information that has been one or more of presented by or input into the first application;
obtaining, by the one or more computing devices, contextualization data indicative of one or more aspects of at least one of the user or the first application, wherein the contextualization data comprises application data indicative of at least one of an application type or an application description for one or more of the first application or the second application;
determining, by the one or more computing devices based at least in part on the user interaction data, a plurality of suggested candidate inputs for an input field associated with a second application;
based at least in part on the contextualization data, using, by the one or more computing devices, a machine-learned model to determine, for each of the plurality of suggested candidate inputs, a likelihood that the user will select the suggested candidate input;
selecting, by the one or more computing devices, a subset of suggested candidate inputs from the plurality of suggested candidate inputs based on the likelihood that the user will select each suggested candidate input of the subset of suggested candidate inputs, wherein the subset of suggested candidate inputs corresponds to the first application type;
displaying, by the one or more computing devices and concurrently with display of the second application, an interface respectively comprising two or more options to select two or more respective suggested candidate inputs of the subset of suggested candidate inputs for entry into the input field associated with the second application.
21. A computer-implemented method comprising:
obtaining, by one or more computing devices, user interaction data for a user and a first application, wherein the user interaction data is indicative of information that has been one or more of presented by or input into the first application;
obtaining, by the one or more computing devices, contextualization data indicative of one or more aspects of at least one of the user, the first application, or the user interaction data;
determining, by the one or more computing devices based at least in part on the user interaction data, a plurality of suggested candidate inputs for an input field associated with a second application;
based at least in part on the contextualization data, using, by the one or more computing devices, a machine-learned model to determine, for each of the plurality of suggested candidate inputs, a likelihood that the user will select the suggested candidate input;
receiving, by the one or more computing devices, information indicative of a user input that selects a first suggested candidate input of the plurality of suggested candidate inputs; and
sending, by the one or more computing devices, federated model update information to a computing system, wherein the computing system comprises a model trainer that generates updates for the machine-learned model via federated learning, and wherein the federated model update information is based at least in part on:
the information indicative of the user input that selects the first suggested candidate input, and
the likelihood that the user will select the first suggested candidate input.
2. The computer-implemented method of claim 1, wherein the contextualization data further comprises at least one of:
historical user data associated with the user; or location data associated with the user.
22. The computer-implemented method of claim 21, wherein the contextualization data further comprises at least one of:
historical user data associated with the user; or
location data associated with the user.
3. The computer-implemented method of claim 1, wherein the application type comprises a:
web browser application; social media application; calendar application; messaging application; mapping application; or textual input application.
26. The computer-implemented method of claim 21, wherein the first application comprises a:
web browser application;
social media application;
calendar application;
messaging application;
mapping application; or textual input application.
4. The computer-implemented method of claim 1, wherein obtaining the user interaction data for a user and a first application, wherein the user interaction data is indicative of information that has been one or more of presented by or input into the first application comprises obtaining, by the one or more computing devices, the user interaction data from one or more application programming interfaces of the first application, wherein the user interaction data is indicative of the information that has been one or more of presented by or input into the first application.
28. The computer-implemented method of claim 21, wherein obtaining the user interaction data for the user and the first application comprises:
obtaining, by the one or more computing devices, the user interaction data from one or more application programming interfaces of the first application, wherein the user interaction data is indicative of the information that has been one or more of presented by or input into the first application.
5. The computer-implemented method of claim 1, wherein the machine-learned model is trained based at least in part on data associated with the user or data associated with the first application.
29. The computer-implemented method of claim 21, wherein the machine-learned model is trained based at least in part on data associated with the user or data associated with the first application.
6. The computer-implemented method of claim 1, wherein the machine-learned model is configured to determine a probability value for each of the plurality of suggested candidate inputs, wherein the probability value indicates a likelihood of the user selecting the suggested candidate input.
30. The computer-implemented method of claim 21, wherein the machine-learned model is configured to determine a probability value for each of the plurality of suggested candidate inputs, wherein the probability value indicates a likelihood of the user selecting the suggested candidate input.
7. The computer-implemented method of claim 6, wherein the one or more suggested candidate inputs are respectively associated with the one or more highest probability values.
31. The computer-implemented method of claim 30, wherein the one or more suggested candidate inputs are respectively associated with one or more highest probability values.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 21-40 are rejected under 35 U.S.C. 103 as being unpatentable over Gershony, Ori et al., US Pub. No. 2017/0180276 (Gershony), in view of Thakurta, Abhradeep Guha et al., US Pub. No. 2017/0359364 (Thakurta).
Claim 21:
Gershony teaches:
A computer-implemented method comprising:
obtaining, by one or more computing devices, user interaction data for a user and a first application, wherein the user interaction data is indicative of information that has been one or more of presented by or input into the first application [¶ 0027] (suggested response may be further based on at least one of sensor data, one or more preferences, a conversation history, and one or more recent activities performed by each of the other participants) [¶ 0082] (location history) [¶ 0096, 105, 114, 120] (conversation history, messaging application, third-party application) [¶ 103] (purchase history);
obtaining, by the one or more computing devices, contextualization data indicative of one or more aspects of at least one of the user, the first application, or the user interaction data [¶ 0020-27, 67, 96-99, 108-109] (contextual indicator, can include conversation history, context of the message, the first user, and the other users is determined. The context may include an event or a holiday. In another example, the context is that the message is a request for an estimated time of arrival of the user);
determining, by the one or more computing devices based at least in part on the user interaction data, a plurality of suggested candidate inputs for an input field associated with a second application [¶ 0108-120] (suggested response may be based on using machine learning to develop a personalized model for a second user. The messaging application 103 may generate a machine learning model and use the machine learning model to generate the suggested response by filtering examples from a corpus of messages or conversations, train a neural network to suggest responses based on the examples, and modify the suggested responses based on personalization of the suggested responses based on information associated with the second user);
based at least in part on the contextualization data, using, by the one or more computing devices, a machine-learned model to determine, for each of the plurality of suggested candidate inputs, a likelihood that the user will select the suggested candidate input [¶ 0111, 121-123] (highest ranked suggested response) [¶ 0120] (probability distribution);
receiving, by the one or more computing devices, information indicative of a user input that selects a first suggested candidate input of the plurality of suggested candidate inputs [¶ 0100-101, 119, 122] (Fig. 4A, 4B, Suggestions “Cute!” and “Sunny smile” may be selected based on being suggested responses for “baby pictures” and “Merry Christmas!” selected based on being a suggested response for “Santa”); and
Gershony does not appear to explicitly disclose “federated learning”.
However, the disclosure of Thakurta teaches:
sending, by the one or more computing devices, federated model update information to a computing system, wherein the computing system comprises a model trainer that generates updates for the machine-learned model via federated learning [¶ 0014, 19, 27, 30, 33, 45, 65, 92, 108, 124] (train model on crowdsourced data, generating and updating term frequencies of known terms using crowdsourced differentially private sketches of the known terms) [¶ 0013, 44, 47, 56, 58, 65, 100, 108, 111, 114] (update trained model, update frequencies, update asset catalog), and wherein the federated model update information is based at least in part on:
the information indicative of the user input that selects the first suggested candidate input [¶ 0012] (rank the most frequently used terms toward the top of a list of suggested terms), and
the likelihood that the user will select the first suggested candidate input [¶ 0012] (rank the most frequently used terms toward the top of a list of suggested terms).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the text-suggestion method of Gershony with the text-suggestion method of Thakurta, with a reasonable expectation of success.
The motivation for doing so would have been the use of a known technique to improve similar devices (methods, or products) in the same way. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385, 1396 (2007), and MPEP § 2143(D).
The known technique of crowdsourcing, or “federated” learning, in Thakurta could be applied to the text suggestion in Gershony. Thakurta and Gershony are similar devices because each makes text suggestions. One of ordinary skill in the art would have recognized that applying the known technique would have improved the similar devices and resulted in an improved system, with a reasonable expectation of success, to provide crowdsourced data while maintaining privacy [Thakurta: ¶ 0005-06].
Claim 22:
Gershony teaches:
The computer-implemented method of claim 21, wherein the contextualization data further comprises at least one of:
historical user data associated with the user; or
location data associated with the user [¶ 0027] (suggested response may be further based on at least one of sensor data, one or more preferences, a conversation history, and one or more recent activities performed by each of the other participants) [¶ 0082] (location history) [¶ 0096, 105, 114, 120] (conversation history, messaging application, third-party application) [¶ 103] (purchase history).
Claim 23:
Thakurta teaches:
The computer-implemented method of claim 21, wherein determining the plurality of suggested candidate inputs for an input field associated with a second application comprises:
processing, by the one or more computing devices, one or more inputs comprising the user interaction data with a language model to obtain a model output, wherein the language model is trained to perform multiple language tasks, and wherein the model output comprises the plurality of suggested candidate inputs for the input field associated with the second application [¶ 0014, 19, 27, 30, 33, 45, 65, 92, 108, 124] (train model on crowdsourced data, generating and updating term frequencies of known terms using crowdsourced differentially private sketches of the known terms) [¶ 0013, 44, 47, 56, 58, 65, 100, 108, 111, 114] (update trained model, update frequencies, update asset catalog).
Claim 24:
Gershony teaches:
The computer-implemented method of claim 23, wherein the one or more inputs further comprises textual content descriptive of instructions to generate the plurality of suggested candidate inputs [¶ 0100-101, 119, 122] (Fig. 4A, 4B, Suggestions “Cute!” and “Sunny smile” may be selected based on being suggested responses for “baby pictures” and “Merry Christmas!” selected based on being a suggested response for “Santa”) [¶ 0111, 121-123] (highest ranked suggested response) [¶ 0120] (probability distribution).
Claim 25:
Gershony teaches:
The computer-implemented method of claim 23, wherein the contextualization data comprises a model output of the language model [¶ 0020-27, 67, 96-99, 108-109] (contextual indicator, can include conversation history, context of the message, the first user, and the other users is determined. The context may include an event or a holiday. In another example, the context is that the message is a request for an estimated time of arrival of the user).
Claim 26:
Gershony teaches:
The computer-implemented method of claim 21, wherein the first application comprises a:
web browser application [¶ 0058, 62-65] (browser, web page);
social media application [¶ 0027, 49, 53, 62, 64, 101, 113, 132] (social media);
calendar application [¶ 0027, 60, 64, 98, 109, 113] (calendar);
messaging application [¶ 0049-52] (messaging application);
mapping application [¶ 0027, 62-66, 98-101, 107] (mapping application); or
textual input application.
Claim 27:
Gershony teaches:
The computer-implemented method of claim 21, wherein obtaining the contextualization data comprises:
obtaining, by the one or more computing devices, the contextualization data indicative of the one or more aspects of at least one of the user or the first application, wherein the contextualization data comprises application data indicative of at least one of an application type or an application description for one or more of the first application or the second application [¶ 0020-27, 67, 96-99, 108-109] (contextual indicator, can include conversation history, context of the message, the first user, and the other users is determined. The context may include an event or a holiday. In another example, the context is that the message is a request for an estimated time of arrival of the user) [¶ 0108-120] (suggested response may be based on using machine learning to develop a personalized model for a second user. The messaging application 103 may generate a machine learning model and use the machine learning model to generate the suggested response by filtering examples from a corpus of messages or conversations, train a neural network to suggest responses based on the examples, and modify the suggested responses based on personalization of the suggested responses based on information associated with the second user).
Claim 28:
Gershony teaches:
The computer-implemented method of claim 21, wherein obtaining the user interaction data for the user and the first application comprises:
obtaining, by the one or more computing devices, the user interaction data from one or more application programming interfaces of the first application, wherein the user interaction data is indicative of the information that has been one or more of presented by or input into the first application [¶ 0027] (suggested response may be further based on at least one of sensor data, one or more preferences, a conversation history, and one or more recent activities performed by each of the other participants) [¶ 0082] (location history) [¶ 0096, 105, 114, 120] (conversation history, messaging application, third-party application) [¶ 103] (purchase history).
Claim 29:
Thakurta teaches:
The computer-implemented method of claim 21, wherein the machine-learned model is trained based at least in part on data associated with the user or data associated with the first application [¶ 0014, 19, 27, 30, 33, 45, 65, 92, 108, 124] (train model on crowdsourced data, generating and updating term frequencies of known terms using crowdsourced differentially private sketches of the known terms) [¶ 0013, 44, 47, 56, 58, 65, 100, 108, 111, 114] (update trained model, update frequencies, update asset catalog).
Claim 30:
Gershony teaches:
The computer-implemented method of claim 21, wherein the machine-learned model is configured to determine a probability value for each of the plurality of suggested candidate inputs, wherein the probability value indicates a likelihood of the user selecting the suggested candidate input [¶ 0111, 121-123] (highest ranked suggested response) [¶ 0120] (probability distribution).
Thakurta also teaches ranking the most frequently used terms toward the top of a list of suggested terms [¶ 0012].
Claim 31:
Gershony teaches:
The computer-implemented method of claim 30, wherein the one or more suggested candidate inputs are respectively associated with one or more highest probability values [¶ 0111, 121-123] (highest ranked suggested response) [¶ 0120] (probability distribution).
Thakurta also teaches ranking the most frequently used terms toward the top of a list of suggested terms [¶ 0012].
Claims 32-40:
Claims 32 and 40 are substantially similar to claim 21 and are rejected using the same art and the same rationale.
Claim 21 is a “method” claim, claim 32 is a “system” claim, and claim 40 is a “medium” claim, but the steps or elements of each claim are essentially the same.
Claim 33 is substantially similar to claim 22 and is rejected using the same art and the same rationale.
Claim 34 is substantially similar to claim 23 and is rejected using the same art and the same rationale.
Claim 35 is substantially similar to claim 24 and is rejected using the same art and the same rationale.
Claim 36 is substantially similar to claim 25 and is rejected using the same art and the same rationale.
Claim 37 is substantially similar to claim 26 and is rejected using the same art and the same rationale.
Claim 38 is substantially similar to claim 27 and is rejected using the same art and the same rationale.
Claim 39 is substantially similar to claim 30 and is rejected using the same art and the same rationale.
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Please see the PTO-892: Notice of References Cited.
Evidence of the level of skill of an ordinary person in the art for claim 21:
Desjardins, Patrick, US 2018/0356957: data collected in real time is used to update ranking and filtering for presenting insertion points for a selected emoji. Contextual analysis of a communication, as well as learning modeling (e.g., based on signal data associated with past actions from a user or a plurality of users), can be utilized to improve processing, for example, to predict which words a user may want to replace.
Leydon, Gabriel et al., US 2016/0004413 A1: crowdsourcing; updating statistical usage. Statistical usage may be based on usage by a single user or by a plurality of users (user favorite, user preference, highest usage).
Citations to Prior Art
A reference to specific paragraphs, columns, pages, or figures in a cited prior art reference is not limited to preferred embodiments or any specific examples. It is well settled that a prior art reference, in its entirety, must be considered for all that it expressly teaches and fairly suggests to one having ordinary skill in the art. Stated differently, a prior art disclosure reading on a limitation of Applicant's claim cannot be ignored on the ground that other disclosed embodiments were cited instead. Therefore, the Examiner's citation to a specific portion of a single prior art reference is not intended to exclusively dictate, but rather, to demonstrate an exemplary disclosure commensurate with the specific limitations being addressed. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)); Upsher-Smith Labs. v. Pamlab, LLC, 412 F.3d 1319, 1323, 75 USPQ2d 1213, 1215 (Fed. Cir. 2005); In re Fritch, 972 F.2d 1260, 1264, 23 USPQ2d 1780, 1782 (Fed. Cir. 1992); Merck & Co. v. Biocraft Labs., Inc., 874 F.2d 804, 807, 10 USPQ2d 1843, 1846 (Fed. Cir. 1989); In re Fracalossi, 681 F.2d 792, 794 n.1, 215 USPQ 569, 570 n.1 (CCPA 1982); In re Lamberti, 545 F.2d 747, 750, 192 USPQ 278, 280 (CCPA 1976); In re Bozek, 416 F.2d 1385, 1390, 163 USPQ 545, 549 (CCPA 1969).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN J SMITH, whose telephone number is (571) 270-3825. The examiner can normally be reached Monday through Friday, 11:00 a.m. - 7:30 p.m. ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ADAM QUELER can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Benjamin Smith/Primary Examiner, Art Unit 2172 Direct Phone: 571-270-3825
Direct Fax: 571-270-4825
Email: benjamin.smith@uspto.gov