DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted from 06/27/2023 to 02/02/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Drawings
The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the following must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.
A. a crossbar array of memristors configured to perform multiplication of a matrix of weights by an array of inputs based on the memristors having resistance values being programmed according to the weights, rows of the crossbar array being applied voltages with magnitudes according to the inputs, and currents generated by the voltages as applied to columns of the crossbar array being summed in connections for the columns respectively as specified in claim 3.
B. an array of memory cells configured to perform multiplication of a matrix of weights by an array of inputs based on the memory cells being programmed to store bits of binary representation of the weights, rows of the memory cells being applied or not applied a predetermined read voltage according to bits of binary representation of the inputs, columns of the memory cells being connected to lines for the columns respectively to sum currents going through the columns of the memory cells, and currents in the lines being digitized for shift and summation in logic circuits as specified in claim 4.
C. an array of logic circuits configured to perform a plurality of multiplications in parallel as specified in claim 5.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claims 19-20 are objected to under 37 CFR 1.71(a), which requires the description to be in “full, clear, concise, and exact terms” so as to enable any person skilled in the art or science to which the invention or discovery appertains, or with which it is most nearly connected, to make and use the same. The following should be corrected.
A. In claim 19, line 3, “storage capacity” should read “the storage capacity” because storage capacity is already introduced in claim 18, from which claim 19 depends. Claim 20 inherits the same deficiency as claim 19 by reason of dependence.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 14-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 14 recites “the remote device” in lines 3-4 and 6. There is insufficient antecedent basis for this limitation in the claim. A remote device in claim 13 was deleted in the amendment. For purposes of examination, the first recitation in lines 3-4 is interpreted as a remote device.
Claim 15 recites “the remote device” in line 4. There is insufficient antecedent basis for this limitation in the claim. A remote device in claim 13 was deleted in the amendment. For purposes of examination, this is interpreted as a remote device. Claims 16-17 inherit the same deficiency as claim 15 by reason of dependence.
Claim 16 recites “wherein the communicating of the neural network output to the remote device is in response to the network interface receiving access messages containing the identification of the first data” in lines 1-3. There is insufficient antecedent basis for the quoted limitations in the claim. The features of communicating, by the storage product using the network interface, the neural network output to a remote device in claim 13 were deleted in the amendment. For purposes of examination, this is interpreted as further comprising: receiving access messages containing the identification of the first data. Claim 17 inherits the same deficiency as claim 16 by reason of dependence.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 13 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Jang et al. (US 20210150321 A1) hereinafter Jang, in view of Drane (US 20230205489 A1).
Regarding claim 1, Jang teaches an apparatus, comprising:
a storage product manufactured as a computer component, the storage product comprising (Jang Figs. 1-2, 4-5C, 7-8B and 12; storage product – storage device):
an artificial intelligence accelerator (Jang Figs. 1-2, 4-5C, 7-8B and 12; paragraph [0086] “in the second operation mode, the second processor 420 may perform the AI calculation based on the second input data IDAT received in FIG. 5A and the weight data WDAT loaded in FIG. 5B to generate calculation result data RDAT”; paragraph [0052] “The first processor 410 and the second processor 420 in FIG. 2 may be the same as or similar to the first processor 312 and the second processor 314 in FIG. 1, respectively”; artificial intelligence accelerator – NPU 420/second processor 314);
a local storage device having a storage capacity accessible (Jang Figs. 1-2, 4-5C, 7-8B and 12; paragraph [0054] “The buffer memory 430 may be configured to store instructions and data executed and processed by the first processor 410 and the second processor 420”; paragraphs [0079, 0083 and 0085-0086] “the neural network system includes at least one of various neural network systems and/or machine learning systems, e.g., an artificial neural network (ANN) system … in the second operation mode, the second processor 420 may perform the AI calculation based on the second input data IDAT received in FIG. 5A and the weight data WDAT loaded in FIG. 5B to generate calculation result data RDAT”; local storage device - nonvolatile memories and buffer memory); and
a host interface configured to be connected to a local host system to control access, made (Jang Figs. 2, 4-5C, 7-8B and 12; paragraph [0039]; paragraph [0056] “The host interface 440 may be configured to provide physical connections between the host device 200 and the storage device 300. For example, the host interface 440 may provide an interface corresponding to a bus format of the host for communication between the host device 200 and the storage device 300”; paragraph [0082]; host interface - host interface 440; local host system – host device);
wherein the storage product is configured to perform at least a portion of computations of the artificial neural network model using the artificial intelligence accelerator to generate a neural network output from neural input data received (Jang Figs. 5B-5C and paragraph [0086] “in the second operation mode, the second processor 420 may perform the AI calculation based on the second input data IDAT received in FIG. 5A and the weight data WDAT loaded in FIG. 5B to generate calculation result data RDAT”; neural network output – result data RDAT; neural input data - second input data IDAT).
Jang does not explicitly teach a network interface operable on a computer network; local storage device having a storage capacity accessible via the network interface; a host interface configured to be connected to a local host system to control access, made via the network interface, to the storage capacity; and wherein the storage product is configured to perform at least a portion of computations of the artificial neural network model using the artificial intelligence accelerator to generate a neural network output from neural input data received via the network interface.
However, in the same field of endeavor, Drane discloses a network interface operable on a computer network (Drane Fig. 12B and paragraphs [0214-0218]; network interface - network interface 1210).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Jang using Drane and configure the storage device to include a network interface in addition to the host interface in order to provide access to remote storage containing neural network model data or remote systems and/or in order to offload some operations of the host system to the network interface such as resource allocation and management operations and/or network and/or data security operations (Drane paragraphs [0215-0217]).
Therefore, the combination of Jang as modified in view of Drane teaches a network interface operable on a computer network; local storage device having a storage capacity accessible via the network interface; a host interface configured to be connected to a local host system to control access, made via the network interface, to the storage capacity; and wherein the storage product is configured to perform at least a portion of computations of the artificial neural network model using the artificial intelligence accelerator to generate a neural network output from neural input data received via the network interface.
Regarding claim 2, Jang as modified in view of Drane teaches all the limitations of claim 1 as stated above. Further, Jang as modified in view of Drane teaches wherein the artificial intelligence accelerator includes a multiplier-accumulator unit (Jang paragraphs [0086] “the second processor 420 may perform the AI calculation based on the second input data IDAT received in FIG. 5A and the weight data WDAT loaded in FIG. 5B to generate calculation result data RDAT, and may transmit the calculation result data RDAT to the host device 200 … For example, the calculation result data RDAT may represent a result of multiplication and accumulation (MAC) operations performed by the neural network system”; multiplier-accumulator unit – components performing the MAC operations).
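For clarity of the record, the multiply-accumulate (MAC) operation referenced above can be illustrated with the following sketch (illustrative only; not drawn from Jang or Drane):

```python
# Illustrative multiply-accumulate (MAC): each input is multiplied by a
# weight and the products are accumulated into a single running sum.
def mac(inputs, weights):
    acc = 0
    for x, w in zip(inputs, weights):
        acc += x * w  # one multiply-accumulate step
    return acc

result = mac([1, 2, 3], [4, 5, 6])  # 1*4 + 2*5 + 3*6 = 32
```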
Regarding claim 13, Jang teaches a method, comprising:
providing, (Jang Figs. 1-2, 4-5C, 7-8B and 12; paragraph [0039]; storage product – storage device; a local storage device – nonvolatile memories and buffer memory);
controlling, via a local host system connected to a host interface of the storage product, access to the storage capacity (Jang Figs. 1-2, 4-5C, 7-8B and 12; paragraph [0039]; paragraph [0056] “The host interface 440 may be configured to provide physical connections between the host device 200 and the storage device 300. For example, the host interface 440 may provide an interface corresponding to a bus format of the host for communication between the host device 200 and the storage device 300”; paragraph [0082]; host interface - host interface 440; local host system – host device);
storing, in the storage product, an artificial neural network model having instructions executable by an artificial intelligence accelerator of the storage product (Jang Figs. 1-2, 4-5C, 7-8B and 12 and paragraph [0054] “The buffer memory 430 may be configured to store instructions and data executed and processed by the first processor 410 and the second processor 420”; paragraph [0085] “the neural network system includes at least one of various neural network systems and/or machine learning systems, e.g., an artificial neural network (ANN) system … in the second operation mode, the second processor 420 may perform the AI calculation based on the second input data IDAT received in FIG. 5A and the weight data WDAT loaded in FIG. 5B to generate calculation result data RDAT”; artificial intelligence accelerator - NPU 420/second processor 314);
receiving, (Jang Fig. 5A and paragraph [0081] “in the second operation mode, second input data IDAT may be provided from the external interface 210 and the host interface 220 of the host device 200, and the storage device 300a may receive the second input data IDAT”);
performing, by the storage product using the artificial intelligence accelerator, at least a portion of computations of the artificial neural network model according to the instructions (Jang Figs. 1-2, 4-5C, 7-8B and 12; paragraph [0086] “in the second operation mode, the second processor 420 may perform the AI calculation based on the second input data IDAT received in FIG. 5A and the weight data WDAT loaded in FIG. 5B to generate calculation result data RDAT”); and
generating, by the storage product, a neural network output from the artificial neural network model having the neural input data as input (Jang Figs. 1-2, 4-5C, 7-8B and 12; paragraph [0086] “in the second operation mode, the second processor 420 may perform the AI calculation based on the second input data IDAT received in FIG. 5A and the weight data WDAT loaded in FIG. 5B to generate calculation result data RDAT”; neural network output - result data RDAT).
Jang does not explicitly teach providing, via a network interface of a storage product, access to a storage capacity of a local storage device of the storage product; controlling, via a local host system connected to a host interface of the storage product, access to the storage capacity through the network interface; and receiving, in the network interface, first data specifying neural input data.
However, in the same field of endeavor, Drane discloses a network interface operable on a computer network that provides storage access to a storage device (Drane Fig. 12B and paragraphs [0214-0218]; network interface - network interface 1210).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Jang using Drane and configure the storage device to include a network interface in addition to the host interface in order to provide access such as receiving and transmitting data to remote storage containing neural network model data or remote systems and/or external interfaces and in order to offload some operations of the host system to the network interface such as resource allocation and management operations and/or network and/or data security operations (Drane paragraphs [0215-0217]).
Therefore, the combination of Jang as modified in view of Drane teaches providing, via a network interface of a storage product, access to a storage capacity of a local storage device of the storage product; controlling, via a local host system connected to a host interface of the storage product, access to the storage capacity through the network interface; and receiving, in the network interface, first data specifying neural input data.
Regarding claim 18, Jang teaches a computing device, comprising:
a computer bus (Jang Fig. 1 and paragraph [0028] computer bus – physical connection between the host device and the storage device);
a local host system connected to the computer bus (Jang Figs. 1-2, 4-5C, 7-8B and 12; local host system – host device or CPU 260); and
a storage product manufactured as a computer component, the storage product comprising (Jang Figs. 1-2, 4-5C, 7-8B and 12; storage product – storage device):
an artificial intelligence accelerator (Jang Figs. 1-2, 4-5C, 7-8B and 12; paragraph [0086] “in the second operation mode, the second processor 420 may perform the AI calculation based on the second input data IDAT received in FIG. 5A and the weight data WDAT loaded in FIG. 5B to generate calculation result data RDAT”; paragraph [0052] “The first processor 410 and the second processor 420 in FIG. 2 may be the same as or similar to the first processor 312 and the second processor 314 in FIG. 1, respectively”; artificial intelligence accelerator – NPU 420/second processor 314);
a local storage device having a storage capacity accessible (Jang Figs. 1-2, 4-5C, 7-8B and 12; paragraph [0054] “The buffer memory 430 may be configured to store instructions and data executed and processed by the first processor 410 and the second processor 420”; paragraphs [0079, 0083 and 0085-0086] “the neural network system includes at least one of various neural network systems and/or machine learning systems, e.g., an artificial neural network (ANN) system … in the second operation mode, the second processor 420 may perform the AI calculation based on the second input data IDAT received in FIG. 5A and the weight data WDAT loaded in FIG. 5B to generate calculation result data RDAT”; local storage device - nonvolatile memories and buffer memory of the storage device); and
a bus connector connected to the computer bus (Jang Figs. 2, 4-5C, 7-8B and 12; paragraph [0056] “The host interface 440 may be configured to provide physical connections between the host device 200 and the storage device 300. For example, the host interface 440 may provide an interface corresponding to a bus format of the host for communication between the host device 200 and the storage device 300”; bus connector - host interface 440);
wherein the local host system is configured to control access, made (Jang Figs. 2, 4-5C, 7-8B and 12; paragraph [0039] “the storage device 300 may be connected to the host device 200 via a block accessible interface which may include, for example, a UFS, an eMMC, an NVMe bus, a SATA bus, a SCSI bus, a SAS bus, or the like. The storage device 300 may be configured to use a block accessible address space corresponding to an access size of the plurality of nonvolatile memories 320a, 320b, 320c and 320d to provide the block accessible interface to the host device 200, for allowing the access by units of a memory block with respect to data stored in the plurality of nonvolatile memories 320a, 320b, 320c and 320d”);
wherein the storage product is configured to perform at least a portion of computations of the artificial neural network model using the artificial intelligence accelerator to generate a neural network output from neural input data received (Jang Figs. 5B-5C and paragraph [0086] “in the second operation mode, the second processor 420 may perform the AI calculation based on the second input data IDAT received in FIG. 5A and the weight data WDAT loaded in FIG. 5B to generate calculation result data RDAT”; neural network output – result data RDAT; neural input data - second input data IDAT).
Jang does not explicitly teach a network interface operable on a computer network; a local storage device having a storage capacity accessible via the network interface; wherein the local host system is configured to control access, made via the network interface, to the storage capacity; and wherein the storage product is configured to perform at least a portion of computations of the artificial neural network model using the artificial intelligence accelerator to generate a neural network output from neural input data received via the network interface.
However, in the same field of endeavor, Drane discloses a network interface operable on a computer network (Drane Fig. 12B and paragraphs [0214-0218]; network interface - network interface 1210).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Jang using Drane and configure the storage device to include a network interface in addition to the host interface in order to provide access to remote storage containing neural network model data or remote systems and in order to offload some operations of the host system to the network interface such as resource allocation and management operations and/or network and/or data security operations (Drane paragraphs [0215-0217]).
Therefore, the combination of Jang as modified in view of Drane teaches a network interface operable on a computer network; a local storage device having a storage capacity accessible via the network interface; wherein the local host system is configured to control access, made via the network interface, to the storage capacity; and wherein the storage product is configured to perform at least a portion of computations of the artificial neural network model using the artificial intelligence accelerator to generate a neural network output from neural input data received via the network interface.
Regarding claim 19, Jang as modified in view of Drane teaches all the limitations of claim 18 as stated above. Further, Jang as modified in view of Drane teaches further comprising: a data generator (Jang paragraph [0081]; data generator – microphone and/or camera; bulk data – voice data and/or image data).
However, in the same field of endeavor, Drane discloses writing or storing data via the network interface that is connected to the computer network (Drane paragraph [0218]).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Jang using Drane and configure the external interfaces such as the microphone and/or camera to be connected to the computer network and to write the bulk data into the storage devices via the network interface by connecting the external interfaces to the network ports or I/O interface to couple external devices to the storage device (Drane paragraph [0218]).
Therefore, the combination of Jang as modified in view of Drane teaches further comprising: a data generator connected to the computer network and configured to write bulk data into the storage capacity via the network interface, the bulk data specifying the neural input data.
Claims 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Jang in view of Drane as applied to claim 2 above, and further in view of Hoang et al. (US 20210110235 A1), hereinafter Hoang.
Regarding claim 3, Jang as modified in view of Drane teaches all the limitations of claim 2 as stated above.
Jang does not explicitly teach wherein the multiplier-accumulator unit includes a crossbar array of memristors configured to perform multiplication of a matrix of weights by an array of inputs based on the memristors having resistance values being programmed according to the weights, rows of the crossbar array being applied voltages with magnitudes according to the inputs, and currents generated by the voltages as applied to columns of the crossbar array being summed in connections for the columns respectively.
However, in the same field of endeavor, Hoang discloses a multiplier-accumulator unit that includes a crossbar array of memristors configured to perform multiplication of a matrix of weights by an array of inputs based on the memristors having resistance values being programmed according to the weights, rows of the crossbar array being applied voltages with magnitudes according to the inputs, and currents generated by the voltages as applied to columns of the crossbar array being summed in connections for the columns respectively (Hoang Figs. 9-10 and paragraphs [0054-0055]).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Jang in view of Drane using Hoang and configure the NPU to include a multiplier-accumulator unit that includes a crossbar array of memristors configured to perform multiplication of a matrix of weights by an array of inputs based on the memristors having resistance values being programmed according to the weights, rows of the crossbar array being applied voltages with magnitudes according to the inputs, and currents generated by the voltages as applied to columns of the crossbar array being summed in connections for the columns respectively in order to accelerate in-memory matrix multiplication operations for a neural network inference (Hoang abstract and paragraph [0022]).
Therefore, the combination of Jang as modified in view of Drane and Hoang teaches wherein the multiplier-accumulator unit includes a crossbar array of memristors configured to perform multiplication of a matrix of weights by an array of inputs based on the memristors having resistance values being programmed according to the weights, rows of the crossbar array being applied voltages with magnitudes according to the inputs, and currents generated by the voltages as applied to columns of the crossbar array being summed in connections for the columns respectively.
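As an illustration of the analog matrix multiplication described above (illustrative only; not drawn from Hoang), the crossbar operation can be modeled as follows: row voltages encode the inputs, memristor conductances (the reciprocals of the programmed resistances) encode the weights, and each column line sums the per-cell currents I = V × G:

```python
# Illustrative model of a memristor crossbar performing matrix-vector
# multiplication: conductances (1/resistance) encode the weights, row
# voltages encode the inputs, and each column line sums the per-cell
# currents I = V * G (Kirchhoff's current law).
def crossbar_multiply(conductances, row_voltages):
    num_rows = len(conductances)
    num_cols = len(conductances[0])
    column_currents = [0.0] * num_cols
    for r in range(num_rows):
        for c in range(num_cols):
            column_currents[c] += row_voltages[r] * conductances[r][c]
    return column_currents

# 2x2 conductance (weight) matrix applied to inputs [1.0, 2.0]:
# column 0: 1.0*0.5 + 2.0*1.5 = 3.5; column 1: 1.0*1.0 + 2.0*2.0 = 5.0
currents = crossbar_multiply([[0.5, 1.0], [1.5, 2.0]], [1.0, 2.0])
```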
Regarding claim 4, Jang as modified in view of Drane teaches all the limitations of claim 2 as stated above.
Jang does not explicitly teach wherein the multiplier-accumulator unit includes an array of memory cells configured to perform multiplication of a matrix of weights by an array of inputs based on the memory cells being programmed to store bits of binary representation of the weights, rows of the memory cells being applied or not applied a predetermined read voltage according to bits of binary representation of the inputs, columns of the memory cells being connected to lines for the columns respectively to sum currents going through the columns of the memory cells, and currents in the lines being digitized for shift and summation in logic circuits.
However, in the same field of endeavor, Hoang discloses a multiplier-accumulator unit that includes an array of memory cells configured to perform multiplication of a matrix of weights by an array of inputs based on the memory cells being programmed to store bits of binary representation of the weights, rows of the memory cells being applied or not applied a predetermined read voltage according to bits of binary representation of the inputs, columns of the memory cells being connected to lines for the columns respectively to sum currents going through the columns of the memory cells, and currents in the lines being digitized for shift and summation in logic circuits (Hoang Figs. 14-16 and paragraphs [0063-0065, 0068-0070]).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Jang in view of Drane using Hoang and configure the NPU to include an array of memory cells configured to perform multiplication of a matrix of weights by an array of inputs based on the memory cells being programmed to store bits of binary representation of the weights, rows of the memory cells being applied or not applied a predetermined read voltage according to bits of binary representation of the inputs, columns of the memory cells being connected to lines for the columns respectively to sum currents going through the columns of the memory cells, and currents in the lines being digitized for shift and summation in logic circuits in order to accelerate in-memory matrix multiplication operations for a neural network (Hoang abstract and paragraph [0022]).
Therefore, the combination of Jang as modified in view of Drane and Hoang teaches wherein the multiplier-accumulator unit includes an array of memory cells configured to perform multiplication of a matrix of weights by an array of inputs based on the memory cells being programmed to store bits of binary representation of the weights, rows of the memory cells being applied or not applied a predetermined read voltage according to bits of binary representation of the inputs, columns of the memory cells being connected to lines for the columns respectively to sum currents going through the columns of the memory cells, and currents in the lines being digitized for shift and summation in logic circuits.
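The bit-sliced multiplication described above can be modeled as follows (illustrative only; not drawn from Hoang): each column sum counts the cells where both the applied input bit and the stored weight bit are 1, and the digitized column sums are shifted by their combined bit significance and accumulated in logic:

```python
# Illustrative bit-sliced in-memory multiplication: weight bits are stored
# in memory cells, input bits gate the read voltage row by row, each
# column sum counts matching 1-bits, and the digitized column sums are
# shifted by their bit significance and added in logic circuits.
def bitsliced_dot(inputs, weights, bits=4):
    total = 0
    for i_bit in range(bits):          # input bit-plane (bit-serial rows)
        for w_bit in range(bits):      # weight bit-plane (columns)
            # column sum: count of cells where both bits are 1
            col = sum(((x >> i_bit) & 1) * ((w >> w_bit) & 1)
                      for x, w in zip(inputs, weights))
            total += col << (i_bit + w_bit)  # shift by significance, sum
    return total

# equals the ordinary dot product 3*5 + 2*7 = 29
assert bitsliced_dot([3, 2], [5, 7]) == 29
```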
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Jang in view of Drane as applied to claim 2 above, and further in view of Vantrease et al. (US 20190236049 A1), hereinafter Vantrease.
Regarding claim 5, Jang as modified in view of Drane teaches all the limitations of claim 2 as stated above.
Jang does not explicitly teach wherein the multiplier-accumulator unit includes an array of logic circuits configured to perform a plurality of multiplications in parallel.
However, in the same field of endeavor, Vantrease discloses a multiplier-accumulator unit that includes an array of logic circuits configured to perform a plurality of multiplications in parallel (Vantrease Figs. 1 and 7 and paragraphs [0024-0026]).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Jang in view of Drane using Vantrease and configure the NPU to include an array of logic circuits configured to perform a plurality of multiplications in parallel to accelerate the workload in neural networks (Vantrease paragraph [0017]).
Therefore, the combination of Jang as modified in view of Drane and Vantrease teaches wherein the multiplier-accumulator unit includes an array of logic circuits configured to perform a plurality of multiplications in parallel.
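The parallel multiplier array described above can be modeled as follows (illustrative only; not drawn from Vantrease), here using a thread pool to stand in for the independent logic circuits:

```python
# Illustrative array of multipliers operating in parallel, modeled with a
# thread pool: each "logic circuit" computes one product independently;
# the products can then be accumulated downstream.
from concurrent.futures import ThreadPoolExecutor

def parallel_products(inputs, weights):
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda pair: pair[0] * pair[1],
                             zip(inputs, weights)))

products = parallel_products([1, 2, 3], [4, 5, 6])  # [4, 10, 18]
```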
Allowable Subject Matter
Claims 6-12, 14-17 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and if claims 14-17 are rewritten to overcome the 35 U.S.C. 112(b) rejections discussed above.
The following is a statement of reasons for the indication of allowable subject matter:
None of the prior art references cited explicitly teach or suggest, in combination with other limitations of the claims, the features of “communicate, using the network interface, the neural network output as a replacement of the first data to a remote device” as recited in claims 6 and 14; “communicate, via the network interface, the neural network output as a replacement of the bulk data to a remote device” as recited in claim 20; “transmit, using the network interface and to a remote device, an alert containing an identification of the first data” as recited in claims 8 and 15; and “compare the neural network output with alert generation criteria to generate an alert to a remote device; wherein the alert contains second data configured to identify at least the first data or the neural network output” as recited in claim 12. Claims 7, 9-11 and 16-17 would also be allowable for at least the same reasons as claims 6, 8 and/or 15 by reason of dependence.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Carlo Waje whose telephone number is (571)272-5767. The examiner can normally be reached 9:00-6:00 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Trujillo can be reached at (571) 272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Carlo Waje/Examiner, Art Unit 2151 (571)272-5767