DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-19 are presented for examination in this application. The application was filed on 03/06/2023. Claims 1 and 18 are independent.
Examiner notes
(A). Drawings submitted on 05/24/2023 comply with the provisions of 37 CFR 1.121(d).
(B). Claim limitations are presented in bold font to distinguish them from the cited portions of the references (shown in italic).
(C). The examiner has cited particular columns, line numbers, references, or figures in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the relevant teachings, other passages and figures may apply as well. In preparing responses, the applicant is respectfully requested to fully consider each reference in its entirety as potentially teaching all or part of the claimed invention. See MPEP §§ 2141.02(VI) and 2123.
The examiner requests, in response to this Office action, support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.
When responding to this Office action, applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Priority
Acknowledgment is made of applicant's claim for foreign priority based on Chinese patent application no. 2022102104423, filed on 04/03/2022. The examiner further acknowledges receipt of an electronic copy of the Chinese patent application. Accordingly, the foreign priority filing date is being considered by the examiner.
CONTINGENT LIMITATIONS
Claims 1, 3-8, 13 and 15 (method claims) are interpreted in accordance with Ex parte Schulhauser. See MPEP § 2111.04(II).
The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met. For example, assume a method claim requires step A if a first condition happens and step B if a second condition happens. If the claimed invention may be practiced without either the first or second condition happening, then neither step A nor B is required by the broadest reasonable interpretation of the claim. If the claimed invention requires the first condition to occur, then the broadest reasonable interpretation of the claim requires step A. If the claimed invention requires both the first and second conditions to occur, then the broadest reasonable interpretation of the claim requires both steps A and B.
The broadest reasonable interpretation of a system (or apparatus or product) claim having structure that performs a function, which only needs to occur if a condition precedent is met, requires structure for performing the function should the condition occur. The system claim interpretation differs from a method claim interpretation because the claimed structure must be present in the system regardless of whether the condition is met and the function is actually performed.
See Ex parte Schulhauser, Appeal 2013-007847 (PTAB April 28, 2016) for an analysis of contingent claim limitations in the context of both method claims and system claims. In Schulhauser, both method claims and system claims recited the same contingent step. When analyzing the claimed method as a whole, the PTAB determined that giving the claim its broadest reasonable interpretation, "[i]f the condition for performing a contingent step is not satisfied, the performance recited by the step need not be carried out in order for the claimed method to be performed" (quotation omitted). Schulhauser at 10. When analyzing the claimed system as a whole, the PTAB determined that "[t]he broadest reasonable interpretation of a system claim having structure that performs a function, which only needs to occur if a condition precedent is met, still requires structure for performing the function should the condition occur." Schulhauser at 14. Therefore, "[t]he Examiner did not need to present evidence of the obviousness of the [ ] method steps of claim 1 that are not required to be performed under a broadest reasonable interpretation of the claim (e.g., instances in which the electrocardiac signal data is not within the threshold electrocardiac criteria such that the condition precedent for the determining step and the remaining steps of claim 1 has not been met)"; however, to render the claimed system obvious, the prior art must teach the structure that performs the function of the contingent step along with the other recited claim limitations. Schulhauser at 9, 14.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
The limitations “a buffer-information acquisition module, for obtaining …” and “an operator splitting and memory configuration module … ” in claim 18.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 18 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
INSUFFICIENT CORRESPONDING STRUCTURE
As to claim 18, the limitations identified above invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, as noted above. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed functions of the limitations identified above. The claim is therefore indefinite.
Note that for the limitations of the device for splitting operators, the corresponding structure includes an algorithm for performing the entire claimed functions. See MPEP § 2181(II)(B). The specification here discloses no algorithms for performing the entire claimed functions; it does little more than repeat the claim language. For the purposes of examination, the limitations will be interpreted in accordance with the broadest reasonable interpretation in light of the specification.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 18 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
As to claim 18, the claim invokes interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, without sufficient corresponding structure in the specification, as set forth above. Such claims also lack written description. See MPEP § 2163.03(VI).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1-2, 9 and 17-19 are rejected under 35 U.S.C. 103 as being obvious over Gou et al. (CN 113703775 A, hereinafter Gou) in view of Hinds et al. (US 20210011638 A1, hereinafter Hinds).
As to claim 1, Gou discloses a method for splitting operators, wherein the method is applied to a compilation stage of an artificial intelligence hardware accelerator, the artificial intelligence hardware accelerator comprises a first memory, and the method comprises:
S1: obtaining buffer information required by target operators (page 11, … according to the memory size of the memory size of the target chip and the original calculation data [i.e. target operation] corresponding to each operator in the memory size to adjust the operator in the original calculated graph, so as to obtain the final calculation graph, can be for each operator in the original calculation graph, respectively the following operations: … a certain value obtained by the buffer value. … ); and
S2: splitting the target operators to obtain a splitting result of the target operators (page 11, … the operation data [i.e. result] of each operator is divided into a plurality of data, can according to the type of the operator and the hardware performance parameter of the target chip to determine the resolution dimension of the operation data to be split, then splitting the operation data in the determined resolution dimension to obtain a plurality of data. … ) and
obtaining a storage layout of the target operators in the first memory (page 11, … different topology sequence, the execution efficiency on the target chip may not be the same, so it can select target chip execution efficiency of the target topology [i.e. layout] sequence from a plurality of topology sequence, for example, can be combined with the hardware performance parameter of the target chip, such as the type and number of the calculating unit in the target chip; calculating capacity of the calculating unit, or the size of the memory … ), based on the buffer information required by the target operators (page 11, … the memory size of the target chip, or the memory size of the target chip minus a certain value obtained by the buffer value. for each operator in the original calculation diagram) and a storage capacity of the first memory (page 10, … the storage device; the calculation unit in the target chip calculates the calculation time of the input data; and some waiting time in the operation process (generally negligioless). the time of reading the instruction from the storage device can be determined according to the transmission bandwidth of the port of the target chip and the transmission data quantity; the time of calculating the data can be determined according to the calculated data quantity and the calculation capability of the calculation unit of the target chip; therefore, the operation logic of the cost model is: according to the hardware performance parameter of the target chip (such as, the target chip comprises several types of calculating unit, the number of each type of calculating unit, calculating capacity of each type of calculating unit … );
wherein the splitting result of the target operators (page 12, … if the operation data of the operator (i.e., input output tensor) occupied space exceeds the set size (set based on target chip memory size), then the operation data of the operator is split, split into multiple parts of data, and adding one or more same operator in the calculated graph; so that each part of data after splitting is corresponding to one operator, and the operation data corresponding to each operator can be finished on the target chip for one time … );
Gou does not explicitly disclose the following limitations; however,
Hinds discloses the storage layout of the target operators are used to implement a mapping of a target artificial intelligence model to the artificial intelligence hardware accelerator (par. 0069, … the coprocessor or hardware accelerator could be a graphics processing unit, a floating-point processing unit dedicated to floating-point operations, a vector processing unit dedicated to performing vector operations, or hardware accelerators for specific tasks such as cryptographic operations, digital signal processing, artificial intelligence, … . Further, par. 0074, … data allocated to an address mapped to the scratchpad memory will remain stored in the scratchpad memory until explicitly overwritten by the processor, rather than being replaced dynamically in a cache based on usage recency or some other cache allocation policy. Also, the primary storage units may include predictive storage structures for storing prediction state used to predict aspects of program execution [i.e. target operation] (such as branch outcomes, branch target addresses … ).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Gou to include that the storage layout of the target operators is used to implement a mapping of a target artificial intelligence model to the artificial intelligence hardware accelerator, which can be used for selection of communication links, as disclosed by Hinds, for the purpose of performing vector operations or providing hardware accelerators for specific tasks (see paragraph 0069 of Hinds).
As to claim 2, Gou discloses the method for splitting operators wherein S2 further comprises:
S21: splitting data to be split of the target operators in one or more target dimensions so as to obtain a splitting result of the data to be split (page 11, … the operation data of the operator (i.e., input output tensor) occupied space exceeds the set size (set based on target chip memory size), then the operation data of the operator is split, split into multiple parts of data, and adding one or more same operator in the calculated graph; so that each part of data after splitting is corresponding to one operator, and the operation data corresponding to each operator can be finished on the target chip for one time. aiming at each operator in the calculation graph, executing the same operation, … ); and
S22: obtaining the storage layout of the target operators in the first memory based on the splitting result of the data to be split (page 11, … different topology [i.e. layout] sequence, the execution efficiency on the target chip may not be the same, so it can select target chip execution efficiency of the target topology sequence from a plurality of topology sequence, for example, can be combined with the hardware performance parameter of the target chip, such as the type and number of the calculating unit in the target chip; calculating capacity of the calculating unit, or the size of the memory … ).
As to claim 9, Hinds discloses the method for splitting operators wherein the artificial intelligence hardware accelerator further comprises a second memory (Fig. 1, element 2, 26, 28, further par. 0069, … hardware accelerators for specific tasks such as cryptographic operations, digital signal processing, artificial intelligence … . Further, par. 0072, … data processing system 2 which has a number of processing elements, including main processing elements (CPUs) 4 and hardware accelerators/co-processors 12. The processing elements 4, 12 execute data processing in response to instructions. The CPUs 4 include a number of internal primary storage units including a register file 6 for storing architectural state, an instruction cache 8 for storing instructions fetched upon a memory system and a data cache 10 for storing data from the memory system. … ).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Gou to include the method for splitting operators wherein the artificial intelligence hardware accelerator further comprises a second memory, as disclosed by Hinds, for the purpose of providing opportunities for more frequent power savings than would be possible if primary storage was implemented using volatile storage (see abstract of Hinds).
As to claim 18, Gou discloses a device for splitting operators, wherein the device for splitting operators is applied to a compilation stage of an artificial intelligence hardware accelerator, the artificial intelligence hardware accelerator comprises a first memory, and the device for splitting operators comprises:
a buffer-information acquisition module, for obtaining buffer information required by target operators (page 11, when determining the to-be-compiled neural network corresponding to the calculated graph, analyzing the neural network to obtain the original calculation graph corresponding to the neural network. Because the memory of the target chip is not the same, for some operators in the original calculation diagram, the memory of the target chip may be too small, cannot store the operator corresponding to the operation data, … according to the memory size of the memory size of the target chip and the original calculation data [i.e. target operation] corresponding to each operator in the memory size to adjust the operator in the original calculated graph, so as to obtain the final calculation graph, can be for each operator in the original calculation graph, respectively the following operations: … a certain value obtained by the buffer value. … )
an operator splitting and memory configuration module (page. 4, under the condition that the memory size corresponding to the operator corresponding to the operator is greater than the preset threshold value, adding at least one operator of the same type with the operator in the original calculation graph, so as to split the operation data into multiple parts of data and respectively adding a plurality of operator operations after adding; wherein the preset threshold is determined based on the memory size of the target chip; …), for splitting the target operators to obtain a splitting result of the target operators (page 12, … if the operation data of the operator (i.e., input output tensor) occupied space exceeds the set size (set based on target chip memory size), then the operation data of the operator is split, split into multiple parts of data, and adding one or more same operator in the calculated graph; so that each part of data after splitting is corresponding to one operator, and the operation data corresponding to each operator can be finished on the target chip for one time … ); and
obtaining a storage layout of the target operators in the first memory (page 11, … different topology sequence, the execution efficiency on the target chip may not be the same, so it can select target chip execution efficiency of the target topology [i.e. layout] sequence from a plurality of topology sequence, for example, can be combined with the hardware performance parameter of the target chip, such as the type and number of the calculating unit in the target chip; calculating capacity of the calculating unit, or the size of the memory … ), based on the buffer information required by the target operators (page 11, … the memory size of the target chip, or the memory size of the target chip minus a certain value obtained by the buffer value. for each operator in the original calculation diagram);
wherein the splitting result of the target operators (page 12, … if the operation data of the operator (i.e., input output tensor) occupied space exceeds the set size (set based on target chip memory size), then the operation data of the operator is split, split into multiple parts of data, and adding one or more same operator in the calculated graph; so that each part of data after splitting is corresponding to one operator, and the operation data corresponding to each operator can be finished on the target chip for one time … );
Gou does not explicitly disclose the following limitations; however,
Hinds discloses and the storage layout of the target operators are used to implement a mapping of a target artificial intelligence model to the artificial intelligence hardware accelerator (par. 0069, … the coprocessor or hardware accelerator could be a graphics processing unit, a floating-point processing unit dedicated to floating-point operations, a vector processing unit dedicated to performing vector operations, or hardware accelerators for specific tasks such as cryptographic operations, digital signal processing, artificial intelligence, … . Further, par. 0074, … data allocated to an address mapped to the scratchpad memory will remain stored in the scratchpad memory until explicitly overwritten by the processor, rather than being replaced dynamically in a cache based on usage recency or some other cache allocation policy. Also, the primary storage units may include predictive storage structures for storing prediction state used to predict aspects of program execution [i.e. target operation] (such as branch outcomes, branch target addresses … ).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Gou to include that the storage layout of the target operators is used to implement a mapping of a target artificial intelligence model to the artificial intelligence hardware accelerator, which can be used for selection of communication links, as disclosed by Hinds, for the purpose of performing vector operations or providing hardware accelerators for specific tasks (see paragraph 0069 of Hinds).
As to claim 19, Gou-Hinds discloses a non-transitory computer readable storage medium, wherein at least one computer program is stored on the non-transitory computer readable storage medium (Gou page 17, … a computer readable medium does not include a temporary computer readable medium (transitory media), such as modulated data signal and carrier), and
For the remaining limitations, see the discussion of claim 1 above.
Claim 10 is rejected under 35 U.S.C. 103 as being obvious over Gou in view of Hinds, as applied to claim 9 above, and further in view of Yoon et al. (US 20200117999 A1, hereinafter Yoon).
As to claim 10, Gou as modified by Hinds does not explicitly disclose the following limitations; however,
Yoon discloses the method for splitting operators wherein the method further comprises:
determining whether the output data of the target operators needs to be moved to the second memory (par. 0008, … In response to the determining, the updated machine learning model can include first control data that causes the machine learning processor to store the output data for the first operation in the first memory after the output data is generated by the first operation, and second control data that causes the machine learning processor to transfer the output data from the first memory to the second memory prior to the output data being used as input to the second operation. … ).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Gou as modified by Hinds to include determining whether the output data of the target operators needs to be moved to the second memory, as disclosed by Yoon, for the purpose of determining that output data for a first operation is to be stored in a first memory of the multiple memories based on when the output data for the first operation will be used as input by a second operation (see par. 0009 of Yoon).
Objected Claims
Claims 3-8 and 11-16 are objected to as being dependent upon rejected independent claim 1, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, or amended to overcome the contingent-limitation interpretation set forth in this Office action.
Conclusion
The prior art made of record is considered pertinent to applicant's disclosure. See MPEP § 707.05(c). For example:
I. Li et al. (CN 113705785 A) discloses: “According to another aspect of the present disclosure, there is provided a processor, the processor comprises a plurality of function cores, the plurality of function cores comprises a target function core for performing convolution processing, the storage space of the target function core caches the processing sub-data and weight data of the convolution processing; the processing sub-data is obtained by splitting the processing data of the convolution processing; each data in the processing sub-data has a first data identification; for any one convolution operation of any target function core, the target function core is used for according to convolution kernel size and convolution step length of the convolution processing; determining the second data identification of each data in the operation data corresponding to the convolution operation; when there is a second data identification exceeding the first data identification, reading the operation data corresponding to the second data identification not exceeding the first data identification from the storage space, and setting the operation data corresponding to the second data identification exceeding the first data identification to be zero; according to the operation data corresponding to the convolution operation and the weight data, performing convolution operation to obtain the operation result of the convolution operation.…” (please see [0004]).
II. Li et al. (CN 113672172 A) discloses: “The beneficial effects of the present invention are as follows: Different from the existing technology, the invention claims a data interaction method and a receiving card applied to LED display control system. Compared with the existing LED display control system, the upper computer usually splits the data to be interactive based on the buffer size of the receiving card, because the buffer of the receiving card is usually small, resulting in split data message number is more, greatly increasing the interaction times of the data message, resulting in a lower data interaction efficiency. and the upper computer in the invention splits the data to be interactive into several data messages based on the storage capacity of the target storage area, wherein the target storage area on the receiving card memory allows configuring a larger storage capacity, which means that the invention can reduce the number of data messages obtained by splitting, and reduce the interaction times of the data messages; so as to improve the data interaction efficiency, so as to improve the response speed of the client operation and improve the experience degree of the user.” (please see [page 4]).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD H KABIR whose telephone number is (571)270-1341. The examiner can normally be reached M-F, 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sam Sough can be reached at 571-272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Mohammad Kabir/
Examiner, Art Unit 2192
/S. Sough/
SPE, Art Unit 2192