Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/24/2025 has been entered.
Response to Amendments
Claims 1-3, 11-13, and 19-20 have been amended.
Claims 1-3, 5-13, and 15-20 remain pending in the application.
The amendment filed 11/24/2025 is sufficient to overcome the 35 U.S.C. 101 rejections of claims 1-3, 5-13, and 15-20. The previous rejections have been withdrawn.
The amendment filed 11/24/2025 is sufficient to overcome the 35 U.S.C. 103 rejections of claims 1-3, 5-7, 11-13, 15-17, and 19-20 over Kuo in view of Saniee; the 35 U.S.C. 103 rejections of claims 8 and 18 over Kuo in view of Saniee and further in view of Zhang; and the 35 U.S.C. 103 rejections of claims 9 and 10 over Kuo in view of Saniee and further in view of Wan. The previous rejections have been withdrawn.
Response to Arguments
In Argument 1, regarding the 101 rejections, applicant argues that the rejections should be withdrawn because the claims integrate the judicial exceptions into the practical application of an improvement in the technical field of training neural networks. Examiner agrees, and the 101 rejections have been withdrawn.
In Argument 2, regarding the prior art rejections, applicant argues that none of the cited prior art teaches “performing a plurality of iterations of supervised co-training of the full-sized network and a respective subset of the plurality of sub-networks, and excluding a smallest sub-network in size from the respective subset of the plurality of sub-networks in an iteration of the plurality of iterations”. Applicant argues that the cited teaching of Kuo only recites training a huge number of sub-networks, and not the whole network. Examiner notes this argument is moot because the cited portion of Kuo is directed towards the teaching of performing a plurality of iterations of supervised co-training of a respective subset of the plurality of sub-networks, and is not directed towards the training of the full-sized network. Kuo teaches performing a plurality of iterations of supervised co-training of a respective subset of the plurality of sub-networks (in multiple iterations, sub-networks are trained in the same training process (co-trained), C20:L58-66; the learning may be supervised learning, C13:L17-24).
Applicant also argues that the “whole network” taught in Kuo may not reasonably be interpreted as a full-sized network because the whole network is the final network of the training process. As noted above, Kuo is cited to teach performing a plurality of iterations of supervised co-training of a respective subset of the plurality of sub-networks. As set forth below, independent claims 1, 11, and 19 are rejected under 35 U.S.C. 112(b) because the term “full-sized network” is a relative term. In the interest of compact prosecution, if the term were interpreted to mean a network with each subnetwork connected, the teachings of Chen et al (Pub. No.: US 20240283499 A1) meet the limitation, and applicant’s argument is moot in view of the rejection over Chen. Chen teaches performing a plurality of iterations of supervised co-training of the full-sized network (precoding matrix determination network consists of four different sub networks, P0104; subnetworks may be trained independently before the entire precoding matrix determination network is trained, P0118; training of the precoding matrix determination network may be completed over a number of iterations, P0105).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kuo and Chen before them, to include Chen’s specific teaching of training an entire network and its sub-networks as a whole in Kuo’s system of artificial neural network configuration. One would have been motivated to make such a combination to train an entire network and its sub-networks as a whole (see Chen P0104-P0105, P0118) while training a bulk of sub-networks of a network to create a fully nested final network (see Kuo C20:L53-66).
Applicant also argues that Saniee does not teach “excluding a smallest sub-network in size from the respective subset of the plurality of sub-networks in an iteration of the plurality of iterations” and instead teaches a method for pruning sub-networks based on connections with the DNN. Applicant's argument has been fully considered but is not persuasive. Applicant does not explain why the cited portion of Saniee does not teach “excluding a smallest sub-network in size from the respective subset of the plurality of sub-networks in an iteration of the plurality of iterations” beyond stating that the limitation is not taught by Saniee.
Saniee teaches excluding a smallest sub-network in size from the respective subset of the plurality of sub-networks in an iteration of the plurality of iterations (connections of a subnetwork with the smallest magnitude weights may be removed or masked, P0062, P0073; this is an iterative process, P0026, P0083). Under the broadest reasonable interpretation, removing or masking the connections of a subnetwork with the smallest magnitude weights is interpreted as “excluding a smallest sub-network in size from the respective subset of the plurality of sub-networks in an iteration of the plurality of iterations”.
The full prior art rejections are outlined below.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that use the word “means,” and are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Such claim limitation(s) is/are: “training a full-sized network and a plurality of sub-networks, the training comprising performing at least one epoch of training the full-sized network, and performing a plurality of iterations of supervised co-training of the full-sized network and a respective subset of the plurality of sub-networks…while iteratively decreasing a size of the subset of the sub-networks by removing a smallest sub-network from each subset of the sub-networks; and selecting a sub-network from among co-trained sub-networks based on a hardware constraint” in claim 19.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 19 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The specification does not disclose sufficient corresponding structure for the claimed functions of training a full-sized network and a plurality of sub-networks (see MPEP 2181(IV)). Thus, a person of ordinary skill in the art cannot determine how to perform the claimed functions, and the specification fails to demonstrate that the inventor was in possession of the claimed invention at the time of filing. Claim 20 incorporates by reference all limitations of claim 19 and is rejected under 35 U.S.C. 112(a) for similar reasons.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-3, 5-13, and 15-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
The term “full-sized network” in claims 1, 11, and 19 is a relative term which renders the claim indefinite. The term “full-sized network” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Dependent claims 2, 3, 5-10, 12, 13, 15-18, and 20 inherit this deficiency and are rejected under the same rationale. Appropriate action is required.
Claim limitation “training a full-sized network and a plurality of sub-networks, the training comprising performing at least one epoch of training the full-sized network, and performing a plurality of iterations of supervised co-training of the full-sized network and a respective subset of the plurality of sub-networks…while iteratively decreasing a size of the subset of the sub-networks by removing a smallest sub-network from each subset of the sub-networks; and selecting a sub-network from among co-trained sub-networks based on a hardware constraint” in claim 19 invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link that structure, material, or acts to the claimed function; no such association can be found in the specification (see MPEP 2181(III)). Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Claim 20 incorporates by reference all limitations of claim 19 and is rejected under 35 U.S.C. 112(b) for similar reasons.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-7, 11-13, 15-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kuo et al (Pub. No.: US 12093817 B2), hereafter Kuo, in view of Chen et al (Pub. No.: US 20240283499 A1), hereafter Chen, and Saniee et al (Pub. No.: US 20220318631 A1), hereafter Saniee.
Regarding claims 1, 11, and 19, Kuo teaches co-training the full-sized network and a plurality of sub-networks, wherein the co-training of the full-sized network and the plurality of sub-networks comprises: performing a plurality of iterations of supervised co-training of … a respective subset of the plurality of sub-networks (in multiple iterations, sub-networks are trained in the same training process (co-trained), C20:L58-66. The learning may be supervised learning, C13:L17-24)… selecting a sub-network from among co-trained sub-networks based on a hardware constraint (heuristic search algorithm is used to search for an optimal subnetwork from the subnetworks based on a given resource configuration including number of cores and size of memory, C21:L3-8).
Kuo does not appear to explicitly teach “training a full-sized network, wherein the training of the full-sized network comprises performing at least one epoch of training the full-sized network;…performing a plurality of iterations of supervised co-training of the full-sized network”.
Chen teaches training a full-sized network, wherein the training of the full-sized network comprises performing at least one epoch of training the full-sized network;…performing a plurality of iterations of supervised co-training of the full-sized network (precoding matrix determination network consists of four different sub networks, P0104. Subnetworks may be trained independently before the entire precoding matrix determination network is trained, P0118. Training of the precoding matrix determination network may be completed over a number of iterations, P0105).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kuo and Chen before them, to include Chen’s specific teaching of training an entire network and its sub-networks as a whole in Kuo’s system of artificial neural network configuration. One would have been motivated to make such a combination to train an entire network and its sub-networks as a whole (see Chen P0104-P0105, P0118) while training a bulk of sub-networks of a network to create a fully nested final network (see Kuo C20:L53-66).
Kuo in view of Chen does not appear to explicitly teach “excluding a smallest sub-network in size from the respective subset of the plurality of sub-networks in an iteration of the plurality of iterations”.
Saniee teaches excluding a smallest sub-network in size from the respective subset of the plurality of sub-networks in an iteration of the plurality of iterations (connections of a subnetwork with the smallest magnitude weights may be removed or masked, P0062, P0073. This is an iterative process, P0026, P0083).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kuo, Chen, and Saniee before them, to include Saniee’s specific teaching of removing or masking connections of a subnetwork with the smallest magnitude weights in Kuo’s system of artificial neural network configuration. One would have been motivated to make such a combination to remove or mask connections of a subnetwork with the smallest magnitude weights (see Saniee P0026, P0062, P0073, P0083) while using a heuristic search algorithm to determine components that should be removed from a neural network (see Kuo C6:L44-49).
Regarding claim 19, this element invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, and is interpreted as processor(s) executing the algorithm described in the specification that causes the processor(s) to perform the claimed functions (the algorithms for: training a full-sized network and a plurality of sub-networks, the training comprising performing at least one epoch of training the full-sized network, and performing a plurality of iterations of supervised co-training of the full-sized network and a respective subset of the plurality of sub-networks…while iteratively decreasing a size of the subset of the sub-networks by removing a smallest sub-network from each subset of the sub-networks; and selecting a sub-network from among co-trained sub-networks based on a hardware constraint).
Regarding claims 2, 12, and 20, Kuo in view of Chen and Saniee teaches the limitations of claims 1, 11, and 19 as outlined above. Kuo further teaches wherein the co-training of the full-sized network and the respective subset of the sub-networks comprises maximizing the full-sized network only with respect to ground truth labels (each block of the full network is maximized with respect to the ground truth by analyzing errors between the ground truth and prior classifiers. Blocks with larger errors compared to the ground truth are dropped, maximizing the full network, C10:L12-33).
Regarding claims 3 and 13, Kuo in view of Chen and Saniee teaches the limitations of claims 1 and 11 as outlined above. Kuo further teaches wherein the co-training of the full-sized network and the respective subset of the sub-networks comprises maximizing the sub-networks only with respect to output of the full-sized network (Only accurate sub-networks remain after less accurate sub-networks are removed, and this is gauged by the performance of the whole network, C18:L32-46).
Regarding claims 5 and 15, Kuo in view of Chen and Saniee teaches the limitations of claims 1 and 11 as outlined above. Kuo further teaches wherein, for each iteration, each subset of the sub- networks is selected at random (“the training operation includes randomly selecting nested sub-networks during training batches”, C3:L33-35).
Regarding claims 6 and 16, Kuo in view of Chen and Saniee teaches the limitations of claims 1 and 11 as outlined above. Kuo further teaches performing of the at least one epoch of training of the full-sized network is without performing co-training with the sub-networks and is before the performing of the plurality of iterations of supervised co-training (under ordered dropout, the training of sub-networks within the full network after removing certain sub-networks is compared to the training of the full network; this comparison cannot be made without a first epoch of training the full network beforehand, C19:L5-14, Figure 6).
Regarding claims 7 and 17, Kuo in view of Chen and Saniee teaches the limitations of claims 1 and 11 as outlined above. Kuo further teaches wherein each of the sub-networks has a channel expansion ratio selected from the group consisting of 3, 4, and 6 (channel expansion ratio is interpreted as width in view of P0044 of spec of instant application. Width of the sub-networks may be 3 or 4 bits, C19:L63-67, C20:L1-9).
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kuo in view of Chen and Saniee and further in view of Zhang et al (Pub. No.: CN 112700786 B), hereafter Zhang.
Regarding claims 8 and 18, Kuo in view of Chen and Saniee teaches the limitations of claims 1 and 11 as outlined above. Kuo does not appear to explicitly teach wherein each of the sub-networks has a depth selected from the group consisting of 2, 3, and 4.
Zhang teaches wherein each of the sub-networks has a depth selected from the group consisting of 2, 3, and 4 (depth is interpreted as number of layers in view of P0044 of spec of instant application. Zhang teaches each subnetwork has 3 layers. Page 13, full paragraph 3).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kuo, Chen, Saniee, and Zhang before them, to include Zhang’s specific teaching of each subnetwork having 3 layers in Kuo’s system of artificial neural network configuration. One would have been motivated to make such a combination because Kuo’s neural networks have at least 2 layers and may include a different number of building blocks, with the sliding windows for the layers configured to cover the same proportion of building blocks per layer (see Kuo C6:L11-16).
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Kuo in view of Chen and Saniee and further in view of Wan et al (Pub. No.: CN 111798469 A), hereafter Wan.
Regarding claim 9, Kuo in view of Chen and Saniee teaches the limitations of claim 1 as outlined above. Kuo does not appear to explicitly teach wherein each of the sub-networks consists of five blocks.
Wan teaches wherein each of the sub-networks consists of five blocks (sub network comprises 5 convolution blocks. Page 3, paragraph 8).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kuo, Chen, Saniee, and Wan before them, to include Wan’s specific teaching of each subnetwork having 5 blocks in Kuo’s system of artificial neural network configuration. One would have been motivated to make such a combination because, in Kuo, each subnetwork includes blocks for specific functions for a respective resource configuration (see Kuo C3:L19-22).
Regarding claim 10, Kuo in view of Chen and Saniee and further in view of Wan teaches the limitations of claim 9 as outlined above. Wan further teaches wherein the five blocks have respective kernel sizes of 3, 5, 3, 3, and 5 (In view of P0045-P0046 of the instant application, the kernel sizes of blocks 1, 3, and 4 may be fixed to 3 and blocks 2 and 5 may be fixed to 5. It is not clear in either the specification or in the claim why these kernel sizes in particular are chosen for the specific blocks. Examiner interprets this limitation to mean kernel sizes should be selected from the group consisting of 3 and 5 for each of the 5 blocks. Wan teaches the kernel size of each of the 5 blocks is 3. Page 3, paragraph 8).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISHAN MOUNDI whose telephone number is (703) 756-1547. The examiner can normally be reached 8:30 A.M. - 5 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Ell can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/I.N.M./Examiner, Art Unit 2141
/MATTHEW ELL/Supervisory Patent Examiner, Art Unit 2141