DETAILED ACTION
This action is responsive to the application filed on 11/22/2022. Claims 1-24 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., abstract idea) without significantly more.
Regarding claim 1:
Subject Matter Eligibility Analysis Step 1:
Claim 1 recites “A computing system” and thus a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 1 recites “a generative adversarial network…convert the input data from a time domain into a frequency domain…a generative model, and a discriminative model…wherein the generative model and the discriminative model operate in a frequency domain,” which describes a process that, under its broadest reasonable interpretation, encompasses mathematical concepts. That is, other than reciting generic computing components (e.g., a network controller and an accelerator including logic, substrates, and transformation hardware), nothing in the claimed elements precludes the steps from practically being performed in the mind with the aid of pen and paper.
For example, the claim discusses a generative adversarial network, conversion of input data from the time domain to the frequency domain, and a generative model and a discriminative model operating in the frequency domain; thus, the limitations encompass mathematical calculations and/or equations (MPEP 2106.04(a)(2)(I)) (see paragraphs [0002], [0023]-[0026], and [0076] and Fig. 3, wherein the conversion uses a discrete cosine transform and the models of the GAN comprise convolution operations).
If a claim limitation, under its broadest reasonable interpretation, covers performance of a mathematical calculation/equation in the mind with the aid of pen and paper but for the recitation of generic computer components, then it falls within the “Mathematical concepts” grouping of abstract ideas.
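For illustration only (not part of the record), the recited time-to-frequency conversion can be expressed as plain cosine sums. The following Python sketch of the unnormalized DCT-II, using hypothetical input values, shows the kind of calculation that could in principle be carried out with pen and paper:

```python
import math

# DCT-II written as the plain cosine sums a person could evaluate by hand:
#   X[k] = 2 * sum_n x[n] * cos(pi * (2n + 1) * k / (2N))
def dct_ii(x):
    N = len(x)
    return [2 * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                    for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 3.0, 4.0]  # hypothetical time-domain samples
X = dct_ii(x)

# The k = 0 coefficient reduces to twice the plain sum of the inputs:
# 2 * (1 + 2 + 3 + 4) = 20
```

The point of the sketch is only that each coefficient is an ordinary sum of products of known quantities.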
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 1 further recites the additional elements of “a network controller…accelerator coupled to the network controller, wherein the accelerator includes logic coupled to one or more substrates, the logic including: transformation hardware…the transformation hardware…obtain input data”.
These additional elements do not integrate the abstract idea into a practical application because they (a) recite, at a high level of generality, the words “apply it” (or an equivalent) with the judicial exception, use mere instructions to implement the abstract idea on a computer, or merely use a computer as a tool to perform the abstract idea (see MPEP 2106.05(f)) (note the elements can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of accelerators (MPEP 2106.05(h))), and (b) recite insignificant extra-solution activity (i.e., data gathering) (see MPEP 2106.05(g)).
Therefore, claim 1 is directed to the abstract idea.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 1, taken alone and in combination, do not provide significantly more than the abstract idea itself because they (a) use mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, which cannot provide significantly more (e.g., “apply it”) (see MPEP 2106.05(f)), and/or can be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of accelerators (MPEP 2106.05(h)); and (b) recite the insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)), which the courts have deemed to be a well-understood, routine, and conventional activity that does not provide significantly more (MPEP 2106.05(d)); the courts have recognized that receiving or transmitting data over a network (Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362), as well as storing and retrieving information in memory (Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93), are well-understood, routine, and conventional functionalities.
Therefore, based on the discussion of the additional elements above, claim 1 is not patent eligible.
Claim 2, dependent upon claim 1, further recites “…wherein operation of the generative model and the discriminative model in the frequency domain includes element-by-element multiplication operations”, which discloses an additional abstract idea of operations in the frequency domain including multiplication operations. Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
Claim 3, dependent upon claim 1, further recites “…wherein operation of the generative model and the discriminative model in the frequency domain bypasses one or more convolution operations”, which discloses an additional abstract idea of operations in the frequency domain bypassing one or more convolution operations (e.g., omitting at least one convolution operation). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
Claim 4, dependent upon claim 1, further recites “…wherein one or more of the generative model or the discriminative model include: an array of processing elements, a global instruction buffer coupled to the array of processing elements, wherein the global instruction buffer is to selectively issue single instruction multiple data (SIMD) instructions to columns in the array of processing elements, and a plurality of local instruction buffers coupled to the array of processing elements and the global instruction buffer, wherein the plurality of local instruction buffers are to selectively issue multiple instruction multiple data (MIMD) instructions to rows in the array of processing elements”, which discloses additional limitations which use mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, which cannot provide significantly more (e.g., “apply it”) (see MPEP 2106.05(f)), and/or can be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of SIMD/MIMD processing (MPEP 2106.05(h)). The additional limitations also disclose issuing of instructions from a buffer, which can be viewed as insignificant extra-solution activity (e.g., data outputting) that is well-understood, routine, and conventional (MPEP 2106.05(d) and (g); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
Claim 5, dependent upon claim 4, further recites “…wherein each processing element in the array of processing elements includes data access hardware to retrieve the input data and data processing hardware to process the retrieved input data, and wherein the data access hardware is separate from the data processing hardware”, which discloses additional limitations which use mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, which cannot provide significantly more (e.g., “apply it”) (see MPEP 2106.05(f)), and/or can be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of SIMD/MIMD processing (MPEP 2106.05(h)). The additional limitations also disclose retrieving input data, which can be viewed as insignificant extra-solution activity (e.g., data gathering) that is well-understood, routine, and conventional (MPEP 2106.05(d) and (g); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
Claim 6, dependent upon claim 5, further recites “…wherein each data processing hardware includes zero detection hardware to detect zero values in the input data”, which discloses an additional abstract idea of detecting zero values which can be an evaluation or determination (e.g. a mental process). The claim further includes additional limitations which use mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea which cannot provide significantly more (e.g. “apply it”) (see MPEP 2106.05(f)). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
Claim 7, dependent upon claim 1, further recites “…further including a random number generator coupled to the generative model, wherein the random number generator is to insert zero values into an output to the generative model”, which discloses additional limitations which use mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, which cannot provide significantly more (e.g., “apply it”) (see MPEP 2106.05(f)). The additional limitations also disclose outputting zero values, which can be viewed as insignificant extra-solution activity (e.g., data outputting) that is well-understood, routine, and conventional (MPEP 2106.05(d) and (g); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
Claim 8, dependent upon claim 1, further recites “…further including a loss function generator coupled to the discriminative model and the generative models”, which discloses additional limitations which use mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea which cannot provide significantly more (e.g. “apply it”) (see MPEP 2106.05(f)). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
Claims 9 and 18 are similarly rejected on the same basis as claim 1 above.
13. Claims 10-17 and 19-24 are similarly rejected on the same basis as claims 2-8 above. (Note: Claim 17 includes an additional limitation not explicitly mirrored in claims 2-8; however, the additional limitation merely recites generic computing components and thus falls under MPEP 2106.05(f). Thus, the additional limitation would not integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.)
Claim Rejections - 35 USC § 103
14. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
15. Claims 1-5, 9-13, and 17-22 are rejected under 35 U.S.C. 103 as being unpatentable over the NPL reference “FlexiGAN: An End-to-End Solution for FPGA Acceleration of Generative Adversarial Networks” (hereinafter “FlexiGAN”) in view of the NPL reference “Design of an Energy-Efficient Accelerator for Training of Convolutional Neural Networks using Frequency-Domain” (hereinafter “Frequency Domain”), and further in view of Kale, PGPUB No. 2022/0043696.
In regards to claim 1, FlexiGAN discloses a generative adversarial network (GAN) accelerator (see pages 65-68: wherein an FPGA accelerator for a GAN network is disclosed (see Figs. 2 and 5)), wherein the GAN accelerator includes logic, the logic including: a generative model, and a discriminative model (see pages 66-67 and section IV: wherein the FPGA includes logic (e.g., JSON file information, etc.) including generative and discriminative models (see Fig. 2)), wherein the generative model and the discriminative model are to operate in a domain (see page 65, section I: wherein the generative and discriminative models operate in a domain; “GANs [1] automatically generate bigger and richer datasets from a small labeled set and have been proven to be effective in various domains…”).
FlexiGAN does not disclose A computing system comprising: a network controller to obtain input data; a network accelerator coupled to the network controller, wherein the network accelerator includes logic coupled to one or more substrates, the logic including: transformation hardware to convert the input data from a time domain into a frequency domain, a model coupled to the transformation hardware, wherein the model operates in the frequency domain.
Frequency Domain discloses a network accelerator that includes logic including: transformation hardware to convert the input data from a time domain into a frequency domain (page 4, section 5, and Fig. 11(a): wherein a hardware network accelerator includes an FFT/IFFT module that converts input data from the time domain to the frequency domain (also see Fig. 2(c))), and a model coupled to the transformation hardware, wherein the model operates in the frequency domain (page 1 and the conclusion on the last page: wherein the CNN model (implemented using a complex multiplier/accumulator) is indirectly coupled to the FFT/IFFT module and operates in the frequency domain (see Figs. 2(c) and 11(a))).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the GAN accelerator of FlexiGAN to operate in the frequency domain as the accelerator taught in Frequency Domain. It would have been obvious to one of ordinary skill in the art because operating a neural network model that performs convolutions in the frequency domain can improve speed and energy efficiency and reduce memory overhead in an accelerator (see Frequency Domain, page 2, section 1, and last page, section 6).
The combination FlexiGAN and Frequency Domain does not disclose A computing system comprising: a network controller to obtain input data; a network accelerator coupled to the network controller, wherein the network accelerator includes logic coupled to one or more substrates.
Kale discloses A computing system ([0207-0208] and Figs. 12-13) comprising: a network controller to obtain input data ([0208-0210]: wherein network interface (element 341) reads input data for ANN (element 211)) a network accelerator coupled to the network controller ([0208-0210]: wherein deep learning accelerator (element 103) is indirectly coupled to network interface (element 341) (See Fig. 13)) wherein the network accelerator includes logic coupled to one or more substrates. ([0055-0056]: wherein deep learning accelerator includes logic coupled to one or more substrates (e.g. semiconductor substrate of CMOS or integrated circuit dies of FPGA))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the GAN accelerator of FlexiGAN and Frequency Domain to be a part of a system including a network controller and to include one or more substrates as the neural network accelerator of Kale. It would have been obvious to one of ordinary skill in the art because it would have been applying a known technique (using a network accelerator including one or more substrates in a system using a network controller to obtain inputs for the network accelerator, as taught in Kale) to a known device (the GAN accelerator of FlexiGAN and Frequency Domain) ready for improvement to yield predictable results (a system including a network controller and a GAN accelerator including one or more substrates), for the benefit of using an accelerator in a network system environment to improve networking operations and data communications across a network (also see Kale [0025]) (MPEP 2143, Example D).
Claim 9 is similarly rejected on the same basis as claim 1 above, as claim 9 is the accelerator corresponding to the system of claim 1 above. (Note: claim 9 includes an additional limitation stating “wherein the logic is implemented at least partly in one or more of configurable or fixed-functionality hardware”. The references disclose this limitation (FlexiGAN: pages 65-67, disclosing an FPGA, which includes configurable hardware; Frequency Domain: page 1, abstract, and page 5, section 5.1, wherein ASIC and FPGA hardware is disclosed; Kale: [0055]-[0056]).)
Claim 18 is similarly rejected on the same basis as claim 1 above, as claim 18 is the method corresponding to the system of claim 1 above. (Note: claim 18 includes an additional limitation stating “supply the converted input data to a discriminative model”. The examiner asserts that the combination of FlexiGAN and Frequency Domain discloses the above limitation (see FlexiGAN: pages 66-67 and section IV (see Fig. 2); Frequency Domain: page 1 and the conclusion on the last page (see Figs. 2(c) and 11(a))).)
In regards to claim 2, the combination of FlexiGAN, Frequency Domain and Kale discloses The computing system of claim 1 (see rejection of claim 1 above) wherein operation of the generative model and the discriminative model in the frequency domain includes element-by-element multiplication operations. (Frequency Domain: page 5, section 5.1: “As Figure 11(b) shows, the computation engine is designed to accumulate each output feature plane after element-wise multiplication” (also see abstract and section 6 conclusion: which disclose pointwise and element wise multiplications))
Claim 10 is similarly rejected on the same basis as claim 2 above as claim 10 is the accelerator corresponding to the system of claim 2 above.
Claim 19 is similarly rejected on the same basis as claim 2 above as claim 19 is the method corresponding to the system of claim 2 above.
In regards to claim 3, the combination of FlexiGAN, Frequency Domain and Kale discloses The computing system of claim 1 (see rejection of claim 1 above) wherein operation of the generative model and the discriminative model in the frequency domain bypasses one or more convolution operations. (Frequency Domain: see abstract and section 6 conclusion: which discloses replacing convolutions (e.g., bypassing convolutions) with element wise multiplications)
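For illustration only (not part of the prior art of record), the mathematical identity underlying the cited replacement of convolutions with element-wise multiplications is the convolution theorem. The following stdlib-only Python sketch, using hypothetical input values, shows that a direct time-domain convolution matches an element-wise product taken in the DFT domain (with zero-padding so the circular result equals the linear one):

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform (fine for small illustrative sizes)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    # Inverse DFT with the conventional 1/N scaling
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def conv(x, h):
    # Direct time-domain (linear) convolution
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

x = [1.0, 2.0, 3.0, 4.0]   # hypothetical signal
h = [0.5, -1.0, 0.25]      # hypothetical filter
n = len(x) + len(h) - 1    # pad so circular convolution equals linear

direct = conv(x, h)
X = dft(x + [0.0] * (n - len(x)))
H = dft(h + [0.0] * (n - len(h)))
via_freq = [v.real for v in idft([a * b for a, b in zip(X, H)])]

assert all(abs(a - b) < 1e-9 for a, b in zip(direct, via_freq))
```

The element-wise product `X[k] * H[k]` replaces the convolution loop entirely, which is the sense in which frequency-domain operation “bypasses” convolution.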
Claim 11 is similarly rejected on the same basis as claim 3 above as claim 11 is the accelerator corresponding to the system of claim 3 above.
Claim 20 is similarly rejected on the same basis as claim 3 above as claim 20 is the method corresponding to the system of claim 3 above.
In regards to claim 4, the combination of FlexiGAN, Frequency Domain and Kale discloses The computing system of claim 1 (see rejection of claim 1 above) wherein one or more of the generative model or the discriminative model include: an array of processing elements (FlexiGAN, page 68, See Fig. 5: wherein a GAN accelerator includes a plurality of compute engines) a global instruction buffer coupled to the array of processing elements, wherein the global instruction buffer is to selectively issue single instruction multiple data (SIMD) instructions to columns in the array of processing elements, and a plurality of local instruction buffers coupled to the array of processing elements and the global instruction buffer, wherein the plurality of local instruction buffers are to selectively issue multiple instruction multiple data (MIMD) instructions to rows in the array of processing elements. (FlexiGAN: see pages 68-69, section VI and Fig. 5)
Claim 12 is similarly rejected on the same basis as claim 4 above as claim 12 is the accelerator corresponding to the system of claim 4 above.
Claim 21 is similarly rejected on the same basis as claim 4 above as claim 21 is the method corresponding to the system of claim 4 above.
In regards to claim 5, the combination of FlexiGAN, Frequency Domain and Kale discloses The computing system of claim 4 (see rejection of claim 4 above) wherein each processing element in the array of processing elements includes data access hardware to retrieve the input data and data processing hardware to process the retrieved input data, and wherein the data access hardware is separate from the data processing hardware. (FlexiGAN, pages 68-69, See Figs. 5-6: wherein a GAN accelerator includes a plurality of compute engines each including separate data retrieval hardware and data processing hardware)
Claim 13 is similarly rejected on the same basis as claim 5 above as claim 13 is the accelerator corresponding to the system of claim 5 above.
Claim 22 is similarly rejected on the same basis as claim 5 above as claim 22 is the method corresponding to the system of claim 5 above.
In regards to claim 17, the combination of FlexiGAN, Frequency Domain and Kale discloses The GAN accelerator of claim 9 (see rejection of claim 9 above) wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates. (Kale [0055-0056]: wherein an accelerator logic includes CMOS which is a type of transistor (MOSFET) which would include a transistor channel region positioned within one or more substrates)
16. Claims 6, 14, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over FlexiGAN, Frequency Domain, and Kale, and further in view of Desai, PGPUB No. 2019/0041961.
In regards to claim 6, the combination of FlexiGAN, Frequency Domain, and Kale discloses The computing system of claim 5 (see rejection of claim 5 above), wherein each data processing hardware is disclosed (FlexiGAN: see pages 68-69, section VI, and Fig. 5).
The combination of FlexiGAN, Frequency Domain and Kale does not disclose zero detection hardware to detect zero values in the input data.
Desai discloses zero detection hardware to detect zero values in the input data. ([0225 and 0230])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the compute engines of FlexiGAN to include compute hardware to detect zero values in input values as taught in Desai. It would have been obvious to one of ordinary skill in the art because detecting zero values can be used to clock gate zero activations and save compute power in neural network architectures (Desai [0230]).
Claim 14 is similarly rejected on the same basis as claim 6 above as claim 14 is the accelerator corresponding to the system of claim 6 above.
Claim 23 is similarly rejected on the same basis as claim 6 above as claim 23 is the method corresponding to the system of claim 6 above.
17. Claims 7, 15, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over FlexiGAN, Frequency Domain, and Kale, and further in view of Hegde, PGPUB No. 2021/0103822.
In regards to claim 7, the combination of FlexiGAN, Frequency Domain, and Kale discloses The computing system of claim 1 (see rejection of claim 1 above) and inserting zero values into an output to the generative model (FlexiGAN, pages 66-68: “…The primary operation in generative models (transposed convolution or TranConv) fundamentally differs from the one in discriminative models (convolution or Conv). The Conv operation shrinks the input while TranConv expands it by first inserting zeros within its rows and columns” (see Fig. 3(a))).
The combination of FlexiGAN, Frequency Domain, and Kale does not explicitly disclose further including a random number generator coupled to the generative model, wherein the random number generator is to insert zero values into an output to the generative model. While FlexiGAN discloses inserting zeroes into an output of a generative model, it does not disclose using a random number generator to insert the zeroes into the output.
Hegde discloses further including a random number generator coupled to the generative model, wherein the random number generator is to insert zero values into an output to the generative model ([0067], [0071], and [0078]: wherein a latent space vector generator generates random numbers, including zeroes, and is coupled to a generator model so as to insert random zeroes (see Figs. 2-3)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the insertion of zeroes into a generative model of a generative adversarial network as taught in FlexiGAN to be performed by a random number generator as taught in the generative adversarial network of Hegde. It would have been obvious to one of ordinary skill in the art because it would have been the simple substitution of one known element (using a random number generator to insert zeroes into a generative model, as taught in Hegde) for another (generically inserting zeroes into a generative model, as taught in FlexiGAN) to yield predictable results (using a random number generator to insert zeroes into an output of a generative model) (MPEP 2143, Example B). In addition, using a random number generator to insert values into a generative model allows the generator to produce novel/realistic data, which would improve the robustness of generative adversarial networks.
Claim 15 is similarly rejected on the same basis as claim 7 above as claim 15 is the accelerator corresponding to the system of claim 7 above.
Claim 24 is similarly rejected on the same basis as claim 7 above as claim 24 is the method corresponding to the system of claim 7 above.
18. Claim(s) 8 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over FlexiGAN, Frequency Domain, Kale and further in view of Jin, PGPUB No. 2021/0343305.
In regards to claim 8, the combination of FlexiGAN, Frequency Domain and Kale discloses The computing system of claim 1 (see rejection of claim 1 above).
The combination of FlexiGAN, Frequency Domain and Kale does not disclose further including a loss function generator coupled to the discriminative model and the generative model.
Jin discloses further including a loss function generator coupled to the discriminative model and the generative model. (See Fig. 3: wherein a loss function generator (element 330) is coupled to a discriminator model (element 320) and a generative model (element 230))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the generative adversarial network as taught in FlexiGAN to include a loss function generator as in the generative adversarial network taught in Jin. It would have been obvious to one of ordinary skill in the art because using a loss function allows updates to the generative and discriminator models to improve the performance of the models and reduce errors in a GAN (Jin [0044]).
Claim 16 is similarly rejected on the same basis as claim 8 above as claim 16 is the accelerator corresponding to the system of claim 8 above.
Conclusion
19. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
NPL reference “Reconfigurable and Low-Complexity Accelerator for Convolutional and Generative Networks Over Finite Fields,” for teaching an accelerator for a GAN that operates in a real finite field using a fast Fermat number transform that performs element-wise multiplication.
NPL reference “GANAX: A Unified MIMD-SIMD Acceleration for Generative Adversarial Networks,” for teaching an accelerator for a GAN that uses processing elements operating in a SIMD or MIMD mode.
20. Any inquiry concerning this communication or earlier communications from the examiner should be directed to COURTNEY P SPANN whose telephone number is (571)431-0692. The examiner can normally be reached M-F, 9am-6pm, EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jyoti Mehta can be reached at 571-270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/COURTNEY P SPANN/Primary Examiner, Art Unit 2183