DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-15 of U.S. Patent No. 12,386,421. Although the claims at issue are not identical, they are not patentably distinct, as will be explained after the following comparison chart:
Instant Application
U.S. Pat. No. 12,386,421
1. A method comprising: filtering graphical data of a screen object to generate a High Spatial Frequency (HSF) version of the screen object; generate a modulated HSF version of the screen object by applying a characteristic modulation to the HSF version of the screen object; generating display signals including visual stimuli corresponding to the modulated HSF version of the screen object; displaying the display signals on a display screen; receiving neural signals of a user from a neural signal capture device while the user gazes at the display screen; determining the neural signals received when the user gazes at the display screen include a neural signature of the visual stimuli; and in response to determining the neural signals include the neural signature, identifying an object of focus of the user in the display signals displayed on the display screen when, the object of focus being a display object on the display screen that coincides with the visual stimuli.
1. A method of tracking visual attention, the method comprising: obtaining a base image; generating a visual stimulus by performing operations comprising: processing a portion of the base image to extract high spatial frequency (HSF) components of the base image, the visual stimulus having a characteristic modulation, the characteristic modulation being applied to the HSF components of the base image, the HSF components having a spatial frequency higher than a predetermined high spatial frequency threshold; processing the portion of the base image to generate low spatial frequency (LSF) components of the base image, the LSF components having a spatial frequency lower than a predetermined low spatial frequency threshold; applying at least one of a spatial frequency filter or a spatial frequency transform to the portion of the base image; and storing the LSF components and the HSF components as LSF map data and HSF map data respectively; displaying the visual stimulus via a graphical user interface (GUI) of a display; receiving neural signals of a user from a neural signal capture device; and on the basis of detecting information associated with the characteristic modulation of the visual stimulus in the neural signals, determining a point of focus of the user in the viewed image, wherein generating the stimulus further comprises: on the basis of determining the HSF map data does not fulfil a predetermined spatial frequency threshold, performing operations comprising: generating enhanced HSF map data, thereby introducing additional HSF stimulation in the visual stimulus; and applying a temporal modulation to the enhanced HSF map data.
2. The method of claim 1, wherein the visual stimuli are overlay objects different from the screen object and displayed over the display object.
2. The method of claim 1, wherein generating enhanced HSF map data further comprises: on the basis of determining the luminosity of the portion of the base image falls within a predetermined range, generating an HSF noise map for use as the enhanced HSF map data.
3. The method of claim 1, wherein the neural signature comprises information associated with the characteristic modulation of the visual stimuli.
3. The method of claim 1, wherein generating enhanced HSF map data further comprises: on a basis of determining the luminosity of the portion of the base image does not fall within a predetermined range, performing operations comprising: generating a gray overlay map; and generating an HSF noise map for use as the enhanced HSF map data, and wherein displaying the visual stimulus further comprises: merging the enhanced HSF map data with the gray overlay map to provide a blended image; and displaying the blended image via the GUI.
4. The method of claim 1, further comprising: filtering graphical data of the screen object, to generate a Low Spatial Frequency (LSF) version of the screen object; and encoding a static display signal using the LSF version of the screen object.
4. The method of claim 3, wherein the HSF noise map comprises a hard noise map.
5. The method of claim 1, wherein the graphical data of the screen object is filtered using at least one of a spatial frequency filter or a spatial frequency transform.
5. The method of claim 1, wherein displaying the visual stimulus further comprises: merging the visual stimulus with the base image to provide a blended image; and displaying the blended image via the GUI.
6. The method of claim 1, wherein determining whether the received neural signals include the neural signature of the visual stimuli comprises: performing spectral analysis on the received neural signals to determine spectral characteristics of the received neural signals; and determining whether the spectral characteristics of the received neural signals correspond to a spectrum associated with the characteristic modulation of the visual stimuli.
6. A computing apparatus comprising: one or more processors; and a memory storing instructions that, when executed by the one or more processors, cause the computing apparatus to perform operations comprising: obtaining a base image; generating a visual stimulus by performing operations comprising: processing a portion of the base image to extract high spatial frequency (HSF) components of the base image, the visual stimulus having a characteristic modulation, the characteristic modulation being applied to the HSF components of the base image, the HSF components having a spatial frequency higher than a predetermined high spatial frequency threshold; processing the portion of the base image to generate low spatial frequency (LSF) components of the base image, the LSF components having a spatial frequency lower than a predetermined low spatial frequency threshold; applying at least one of a spatial frequency filter or a spatial frequency transform to the portion of the base image; and storing the LSF components and the HSF components as LSF map data and HSF map data respectively; displaying the visual stimulus via a graphical user interface (GUI) of a display; receiving neural signals of a user from a neural signal capture device; and on the basis of detecting information associated with the characteristic modulation of the visual stimulus in the neural signals, determining a point of focus of the user in the viewed image, wherein the instructions that, when executed by the one or more processors, cause the computing apparatus to perform operations comprising generating the stimulus further cause the computing apparatus to perform operations comprising: on the basis of determining the HSF map data does not fulfil a predetermined spatial frequency threshold, performing operations comprising: generating enhanced HSF map data, thereby introducing additional HSF stimulation in the visual stimulus; and applying a temporal modulation to the enhanced HSF map data.
7. The method of claim 1, wherein the neural signal capture device includes an EEG helmet comprising electrodes.
7. The computing apparatus of claim 6 wherein the instructions that, when executed by the one or more processors, cause the computing apparatus to perform operations comprising enhanced HSF map data further cause the computer to perform operations comprising: on the basis of determining the luminosity of the portion of the base image falls within a predetermined range, generating an HSF noise map for use as the enhanced HSF map data.
8. A machine comprising: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the machine to perform operations comprising: filtering graphical data of a screen object to generate a High Spatial Frequency (HSF) version of the screen object; generate a modulated HSF version of the screen object by applying a characteristic modulation to the HSF version of the screen object; generating display signals including visual stimuli corresponding to the modulated HSF version of the screen object; displaying the display signals on a display screen; receiving neural signals of a user from a neural signal capture device while the user gazes at the display screen; determining the neural signals received when the user gazes at the display screen include a neural signature of the visual stimuli; and in response to determining the neural signals include the neural signature, identifying an object of focus of the user in the display signals displayed on the display screen when, the object of focus being a display object on the display screen that coincides with the visual stimuli.
8. The computing apparatus of claim 6, wherein the instructions that, when executed by the one or more processors, cause the computing apparatus to perform operations comprising generating enhanced HSF map further cause the computing apparatus to perform operations comprising: on a basis of determining the luminosity of the portion of the base image does not fall within a predetermined range, performing operations comprising: generating a gray overlay map; and generating an HSF noise map for use as the enhanced HSF map data, and wherein displaying the visual stimulus further comprises: merging the enhanced HSF map data with the gray overlay map to provide a blended image; and displaying the blended image via the GUI.
9. The machine of claim 8, wherein the visual stimuli are overlay objects different from the screen object and displayed over the display object.
9. The computing apparatus of claim 8, wherein the HSF noise map comprises a hard noise map.
10. The machine of claim 8, wherein the neural signature comprises information associated with the characteristic modulation of the visual stimuli.
10. The computing apparatus of claim 6, wherein the instructions that, when executed by the one or more processors, cause the computing apparatus to perform operations comprising displaying the or each visual stimulus further cause the computing apparatus to perform operations comprising: merging the visual stimulus with the base image to provide a blended image; and displaying the blended image via the GUI.
11. The machine of claim 8, wherein the operations further comprise: filtering graphical data of the screen object, to generate a Low Spatial Frequency (LSF) version of the screen object; and encoding a static display signal using the LSF version of the screen object.
11. A non-transitory computer-readable storage medium, the non-statutory computer-readable storage medium including instructions that when executed by one or more processors of a computer, cause the computer to perform operations comprising: obtaining a base image; generating a visual stimulus by performing operations comprising: processing a portion of the base image to extract high spatial frequency (HSF) components of the base image, the visual stimulus having a characteristic modulation, the characteristic modulation being applied to the HSF components of the base image, the HSF components having a spatial frequency higher than a predetermined high spatial frequency threshold; processing the portion of the base image to generate low spatial frequency (LSF) components of the base image, the LSF components having a spatial frequency lower than a predetermined low spatial frequency threshold; applying at least one of a spatial frequency filter or a spatial frequency transform to the portion of the base image; and storing the LSF components and the HSF components as LSF map data and HSF map data respectively; displaying the visual stimulus via a graphical user interface (GUI) of a display; receiving neural signals of a user from a neural signal capture device; and on the basis of detecting information associated with the characteristic modulation of the visual stimulus in the neural signals, determining a point of focus of the user in the viewed image, wherein the instructions that, when executed by the one or more processors, cause the computer to perform operations comprising generating the stimulus further cause the computer to perform operations comprising: on the basis of determining the HSF map data does not fulfil a predetermined spatial frequency threshold, performing operations comprising: generating enhanced HSF map data, thereby introducing additional HSF stimulation in the visual stimulus; and applying a temporal modulation to the enhanced HSF map data.
12. The machine of claim 8, wherein the graphical data of the screen object is filtered using at least one of a spatial frequency filter or a spatial frequency transform.
12. The non-statutory computer-readable storage medium of claim 11, wherein the instructions that, when executed by the one or more processors, cause the computer to perform operations comprising generating enhanced HSF map data further cause the computer to perform operations comprising: on the basis of determining the luminosity of the portion of the base image falls within a predetermined range, generating an HSF noise map for use as the enhanced HSF map data.
13. The machine of claim 8, wherein determining whether the received neural signals include the neural signature of the visual stimuli comprises: performing spectral analysis on the received neural signals to determine spectral characteristics of the received neural signals; and determining whether the spectral characteristics of the received neural signals correspond to a spectrum associated with the characteristic modulation of the visual stimuli.
13. The non-statutory computer-readable storage medium of claim 11, wherein the computer instructions that, when executed by the one or more processors cause the computer to perform operations comprising generating enhanced HSF map data further cause the computer to perform operations comprising: on a basis of determining the luminosity of the portion of the base image does not fall within a predetermined range, performing operations comprising: generating a gray overlay map; and generating an HSF noise map for use as the enhanced HSF map data, and wherein displaying the visual stimulus further comprises: merging the enhanced HSF map data with the gray overlay map to provide a blended image; and displaying the blended image via the GUI.
14. The machine of claim 8, wherein the neural signal capture device includes an EEG helmet comprising electrodes.
14. The non-statutory computer-readable storage medium of claim 13, wherein the HSF noise map comprises a hard noise map.
15. A machine-readable medium including instructions that, when executed by a machine, cause the machine to perform operations comprising: filtering graphical data of a screen object to generate a High Spatial Frequency (HSF) version of the screen object; generate a modulated HSF version of the screen object by applying a characteristic modulation to the HSF version of the screen object; generating display signals including visual stimuli corresponding to the modulated HSF version of the screen object; displaying the display signals on a display screen; receiving neural signals of a user from a neural signal capture device while the user gazes at the display screen; determining the neural signals received when the user gazes at the display screen include a neural signature of the visual stimuli; and in response to determining the neural signals include the neural signature, identifying an object of focus of the user in the display signals displayed on the display screen when, the object of focus being a display object on the display screen that coincides with the visual stimuli.
15. The non-statutory computer-readable storage medium of claim 11, wherein the instructions that, when executed by the one or more processors, cause the computer to perform operations comprising displaying the visual stimulus further cause the computer to perform operations comprising: merging the visual stimulus with the base image to provide a blended image; and displaying the blended image via the GUI.
16. The machine-readable medium of claim 15, wherein the visual stimuli are overlay objects different from the screen object and displayed over the display object.
17. The machine-readable medium of claim 15, wherein the neural signature comprises information associated with the characteristic modulation of the visual stimuli.
18. The machine-readable medium of claim 15, wherein the operations further comprise: filtering graphical data of the screen object, to generate a Low Spatial Frequency (LSF) version of the screen object; and encoding a static display signal using the LSF version of the screen object.
19. The machine-readable medium of claim 15, wherein determining whether the received neural signals include the neural signature of the visual stimuli comprises: performing spectral analysis on the received neural signals to determine spectral characteristics of the received neural signals; and determining whether the spectral characteristics of the received neural signals correspond to a spectrum associated with the characteristic modulation of the visual stimuli.
20. The machine-readable medium of claim 15, wherein the neural signal capture device includes an EEG helmet comprising electrodes.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-15 of U.S. Patent No. 12,386,421. Although the claims at issue are not identical, they are not patentably distinct, since independent claims 1, 8, and 15 of the instant application include all the limitations of independent claims 1, 6, and 11 of U.S. Patent No. 12,386,421. In other words, all the elements of claims 1, 8, and 15 of the instant application are found in claims 1, 6, and 11 of U.S. Patent No. 12,386,421. The difference between application claims 1, 8, and 15 and patent claims 1, 6, and 11 lies in the fact that the patent's claims 1, 6, and 11 include many more elements and are thus more specific.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 15-20 are rejected under 35 U.S.C. 101 because claim 15, lines 1-2, recites "A machine-readable medium including instructions that, when executed by a machine, cause the machine to perform operations comprising:". This language does not ensure that the claim does not encompass signals and other transitory forms of signal transmission. It is suggested that claim 15 be amended to recite "A non-transitory machine-readable medium…". The same correction should be made in claims 16-20.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kouider et al. (WO 2019048525) in view of Zao et al. (US 2014/0058483).
As to claims 1, 8, and 15, Kouider discloses a machine (Fig. 1B) and method comprising: at least one processor (Fig. 1B, 180); and at least one memory (Fig. 1B, 181) storing instructions that, when executed by the at least one processor, cause the machine to perform operations comprising: generating display signals including visual stimuli ("These display signals encode a plurality of visual stimuli intended to be presented to the user 101 by means of the display screen 105"); displaying the display signals on a display screen (Fig. 1A, 105), the display signals including at least one object (Fig. 2A, 01, 02, …ON) ("plurality of graphic objects 01, 02, ON to be presented to a user 101 is used. Each of these graphic objects can be an alphanumeric character (number or letter or other character), a logo, an image, a text, a user interface menu item, a user interface button, an avatar, a 3D object etc"); generating a modulated version of the screen object (Fig. 2A, SM1, SM2, …SMN); and the display signals including visual stimuli corresponding to the modulated version of the screen object ("an animated graphic object OA1, OA2, OAN is generated for each graphic object 01, 02, ON corresponding to one or more corresponding elementary transformations and a corresponding modulation signal. The animated graphic object OA1, OA2, ..., OAN thus generated is presented on a display screen 105"); receiving neural signals of a user from a neural signal capture device (Fig. 1A, 130) while the user gazes at the display screen; determining the neural signals received when the user gazes at the display screen include a neural signature of the visual stimuli ("Step 401 is repeated several times, for each visual stimulus of a plurality of visual stimuli, so as to record the corresponding EEG signals produced by the individual when he focuses his visual attention on the visual stimulus in question"); and in response to determining the neural signals include the neural signature (Fig. 4A, 401) ("The individual thus switches his attention from one stimulus to another and generates EEG signals E1, E2, EX at different frequencies according to the focus of visual attention"), identifying an object of focus of the user in the display signals displayed on the display screen, the object of focus being a display object on the display screen that coincides with the visual stimuli (Fig. 8, 6) [00122] ("a feedback is given to the user on the visual stimulus that has been identified as being observed by the user. The human-machine interface comprises the numbers 0 to 9. In the example of FIG. 7, the user observes the number 6 and the feedback consists in enlarging the number identified as being observed by applying a method for determining the focusing of the visual attention according to the present description. In general, the feedback given to the user can consist in highlighting the visual stimulus identified, for example, by highlighting, blinking, zooming, changing position, changing size or changing color, etc.").
However, Kouider does not specifically disclose filtering graphical data of a screen object to generate a High Spatial Frequency (HSF) version of the screen object; generating a modulated HSF version of the screen object by applying a characteristic modulation to the HSF version of the screen object; and the display signals including visual stimuli corresponding to the modulated HSF version of the screen object.
Zao discloses filtering graphical data of a screen object to generate a High Spatial Frequency (HSF) version of the screen object [0048, 0051]; generating a modulated HSF version of the screen object by applying a characteristic modulation to the HSF version of the screen object [0048, 0059]; and the display signals including visual stimuli corresponding to the modulated HSF version of the screen object [0055, 0059]. Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to have the high frequency, as taught by Zao, in the device of Kouider, so as to obtain preferable sensitivity to the SSVEP signals [0048].
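For illustration only, and forming no part of the record, the technique the combination describes — extracting the high-spatial-frequency content of a screen object and applying a sinusoidal characteristic modulation (flicker) to it — can be sketched as follows. The function names, the frequency cutoff, the 60 fps frame rate, and the 1.2 Hz modulation frequency are assumptions chosen for the sketch, not values taken from either reference.

```python
import numpy as np

def highpass_filter(patch, cutoff_ratio=0.25):
    """Keep only spatial frequencies above a cutoff (the HSF components)."""
    spectrum = np.fft.fftshift(np.fft.fft2(patch))
    rows, cols = patch.shape
    cy, cx = rows // 2, cols // 2
    y, x = np.ogrid[:rows, :cols]
    radius = np.hypot(y - cy, x - cx)
    cutoff = cutoff_ratio * min(rows, cols) / 2
    spectrum[radius < cutoff] = 0  # zero out low-frequency (and DC) content
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

def modulate(hsf_patch, freq_hz, frame, fps=60):
    """Apply a sinusoidal characteristic modulation (flicker) at freq_hz."""
    gain = 0.5 * (1 + np.sin(2 * np.pi * freq_hz * frame / fps))
    return gain * hsf_patch

patch = np.random.rand(64, 64)               # stand-in for a screen object's pixels
hsf = highpass_filter(patch)                 # HSF version of the screen object
frame = modulate(hsf, freq_hz=1.2, frame=10) # one frame of the flickering stimulus
```

Because the DC bin is removed with the rest of the low frequencies, the HSF patch has approximately zero mean; blending it over the unfiltered object therefore adds flicker without shifting overall luminance.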
As to claims 2, 9, and 16, Kouider further discloses the visual stimuli are overlay objects different from the screen object and displayed over the display object [00121, 00122].
As to claims 3, 10, and 17, Kouider further discloses the neural signature comprises information associated with the characteristic modulation of the visual stimuli (Fig. 4A, 401) ("The individual thus switches his attention from one stimulus to another and generates EEG signals E1, E2, EX at different frequencies according to the focus of visual attention").
As to claims 4, 11, and 18, Kouider does not specifically disclose filtering graphical data of the screen object to generate a Low Spatial Frequency (LSF) version of the screen object; and encoding a static display signal using the LSF version of the screen object.
Zao, however, discloses filtering graphical data of the screen object to generate a Low Spatial Frequency (LSF) version of the screen object [0042]; and encoding a static display signal using the LSF version of the screen object [0042, 0043] (low, or no, flickering perception is associated with low frequency and a static display signal). Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to generate a low frequency version of the screen object, as taught by Zao, in the device of Kouider, so that the visual or photic stimuli provided to one or more viewers neither cause any discomfort nor introduce any pathological side effects such as migraine or seizure attacks to the one or more viewers [0042].
As to claims 5 and 12, Kouider further discloses the graphical data of the screen object is filtered using at least one of a spatial frequency filter or a spatial frequency transform ("These signals are periodic sinusoidal signals having different frequencies (and therefore periods) between 1 Hz and 2 Hz in steps of 0.2 Hz, so that any two signals taken in this first set do not have the same frequency. The phases of these signals may be arbitrary. In the example shown in FIG. 2B, the amplitude of these signals varies between 0% and 100%, meaning that the corresponding degree of transformation varies (with a suitable proportionality coefficient) between a minimum value and a maximum value. For all pairs of modulation signals of this set of signals, the spectral overlap ratio is zero (i.e., no common frequency component) and the maximum time correlation coefficient on all pairs of modulation signals is 0.2, this temporal correlation coefficient being calculated on a correlation window of 4 seconds.").
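As an illustration outside the record, the quoted modulation scheme — sinusoids from 1 Hz to 2 Hz in 0.2 Hz steps, with amplitude varying between 0% and 100%, compared over a 4-second correlation window — can be reproduced in a short sketch. The 60 Hz sample rate and the 0.5 correlation bound checked below are assumptions for illustration (the reference itself states a maximum correlation coefficient of 0.2).

```python
import numpy as np

fs, window = 60.0, 4.0                           # assumed display rate; 4 s window
t = np.arange(0, window, 1 / fs)
freqs = np.round(np.arange(1.0, 2.01, 0.2), 1)   # 1.0, 1.2, ..., 2.0 Hz
# Amplitude varies between 0% and 100%, as in the quoted passage.
signals = {f: 0.5 * (1 + np.sin(2 * np.pi * f * t)) for f in freqs}

# No two signals share a frequency, and pairwise correlation over the
# window stays small, so each stimulus remains separable in the response.
pairs = [(a, b) for i, a in enumerate(freqs) for b in freqs[i + 1:]]
max_r = max(abs(np.corrcoef(signals[a], signals[b])[0, 1]) for a, b in pairs)
assert max_r < 0.5
```

The low pairwise correlation is what lets a decoder attribute a detected frequency component to exactly one on-screen stimulus.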
As to claims 6, 13, and 19, Kouider further discloses performing spectral analysis on the received neural signals to determine spectral characteristics of the received neural signals (see paragraphs in reference to Figs. 2B and 2C); and determining whether the spectral characteristics of the received neural signals correspond to a spectrum associated with the characteristic modulation of the visual stimuli (see paragraphs in reference to Figs. 2B and 2C).
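Outside the record, the spectral-analysis step described above — comparing the power spectrum of the received neural signals against the candidate stimulus modulation frequencies — can be sketched as follows. The 256 Hz sampling rate, the 4-second window, the synthetic test signal, and the function name are all assumptions made for the sketch.

```python
import numpy as np

def dominant_stimulus(eeg, fs, candidate_freqs):
    """Return the candidate modulation frequency with the most spectral power."""
    spectrum = np.abs(np.fft.rfft(eeg - eeg.mean())) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
    # Read the power at the spectrum bin nearest each candidate frequency.
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(powers))]

fs = 256
t = np.arange(0, 4, 1 / fs)                      # 4-second analysis window
# Synthetic "EEG": a 1.4 Hz steady-state response buried in noise.
signal = np.sin(2 * np.pi * 1.4 * t) + 0.3 * np.random.randn(len(t))
detected = dominant_stimulus(signal, fs, [1.0, 1.2, 1.4, 1.6])
```

Matching the recovered frequency to the stimulus whose characteristic modulation uses that frequency is what identifies the object of focus.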
As to claims 7, 14, and 20, Kouider further discloses the neural signal capture device includes an EEG helmet comprising electrodes ("the equipment 130 is configured for the acquisition of EEG signals. This equipment is for example made in the form of a helmet, provided with electrodes intended to come into contact with the skull of the user 101. The helmet is for example a helmet manufactured by the company Biosemi®, equipped with 64 electrodes").
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICARDO OSORIO whose telephone number is (571)272-7676. The examiner can normally be reached M-F 9 AM-5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Boddie can be reached at 571-272-0666. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RICARDO OSORIO/Primary Examiner, Art Unit 2625