DETAILED ACTION
Claims 1-24 are pending.
The Office acknowledges receipt of the following papers:
Claims and remarks filed on 12/4/2025.
Withdrawn Objections and Rejections
The specification objection has been withdrawn.
New Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-24 are rejected under 35 U.S.C. 103 as being unpatentable over Diamant et al. (U.S. 10,592,250), in view of Official Notice.
As per claim 1:
Diamant disclosed an apparatus for scheduling an execution of compute kernels on one or more computing devices, the apparatus comprising interface circuitry, machine-readable instructions and processing circuitry to execute the machine-readable instructions to:
determine an impending execution of two or more compute kernels to the one or more computing devices (Diamant: Figures 1 and 3 elements 160 and 310, column 2 lines 37-54, column 3 lines 6-13, column 8 lines 36-51, and column 9 lines 30-39)(The broadest reasonable interpretation of compute kernel is based on published application paragraph 50, which states it may be a portion of a computer program that is offloaded to a computing device. The instruction loader receiving a user function that needs to be split into multiple sections (i.e. compute kernels) determines impending execution of the user function by the neural network processor (i.e. computing device).); and
pipeline a data transfer related to the execution of the two or more compute kernels to the one or more computing devices via the interface circuitry (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 49-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry).), wherein a transfer of input data and/or output data related to execution of one of the two or more compute kernels is temporally staggered with a transfer of input data and/or output data related to execution of another of the two or more compute kernels, respectively (Diamant: Figures 1, 2A-D, and 3 elements 112, 122-126, 138, 160, column 4 lines 38-48, column 5 lines 22-48, column 6 lines 58-64)(The memory holds intermediate and output results of processing by the computing engine. The state buffer caches inputs and weights from the memory for processing by the computing engine. The instruction loader receiving a user function that needs to be split into multiple sections performs the splitting. The split sections are repeatedly loaded into the instruction buffer of the computing engine. It would have been obvious to one of ordinary skill in the art that these split functions would require different inputs and weights for processing, which results in loading them into the state buffer at different times (i.e. temporally staggered). Additionally, official notice is given that programs include load and store operations to load data inputs closer to execution resources for processing and output data results to memory once processing finishes for the advantage of faster data access rates. 
Thus, it would have been obvious to one of ordinary skill in the art to implement load and store operations in Diamant that loads input data and weights into the state buffer as needed and stores data results/intermediate results to the memory as needed.).
As per claim 2:
Diamant disclosed the apparatus according to claim 1, wherein the data transfer is pipelined with an objective of reducing a time to execution of at least one of the two or more compute kernels (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 49-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry). Refilling the instruction buffer ahead of need reduces execution time of the split sections.).
As per claim 3:
Diamant disclosed the apparatus according to claim 1, wherein the data transfer is pipelined with an objective of reducing a power consumption or reducing a thermal impact of the execution of the two or more compute kernels or the data transfer (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 49-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry). Refilling the instruction buffer at different times reduces the thermal impacts of data transfers by breaking the data transfers up.).
As per claim 4:
Diamant disclosed the apparatus according to claim 1, wherein the data transfer is pipelined with an objective of balancing or increasing a utilization of the computing devices (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 49-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry). Refilling the instruction buffer ahead of need increases execution utilization.).
As per claim 5:
Diamant disclosed the apparatus according to claim 1, wherein the data transfer is pipelined with an objective of increasing a data processing throughput of the two or more compute kernels (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 49-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry). Refilling the instruction buffer ahead of need increases execution throughput.).
As per claim 6:
Diamant disclosed the apparatus according to claim 1, wherein the machine-readable instructions comprise instructions to pipeline the data transfer to the one or more computing devices such that a concurrent data transfer of data related to the two or more compute kernels is avoided (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 49-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 8 lines 53-59, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry). Software instructions in an embodiment are implemented to perform the refill step.).
As per claim 7:
Diamant disclosed the apparatus according to claim 1, wherein the execution of the two or more compute kernels depends on the data transfer, wherein the machine-readable instructions comprise instructions to pipeline the data transfer such that execution of at least one second of the two or more compute kernels and associated data is delayed until at least a portion of the data transfer related to a first compute kernel required for starting execution of the first compute kernel is completed (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 49-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 8 lines 53-59, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry). Software instructions in an embodiment are implemented to perform the refill step. As such, the execution of the instructions within the split sections of the user function depends upon being transferred to the neural network processor by the software refill instructions. A first software refill instruction causes instructions to be sent to the instruction buffer. A second, later software refill instruction causes instructions to be sent to the instruction buffer. The second software refill instruction occurs after at least a portion of instructions from the first software refill instruction are loaded into the instruction buffer.).
As per claim 8:
Diamant disclosed the apparatus according to claim 1, wherein the machine-readable instructions comprise instructions to determine, at least for a first compute kernel, a first portion of the data transfer required for starting execution of the first compute kernel and a second portion of the data transfer used after the execution of the first compute kernel is started, and to pipeline the data transfer such that the data transfer related to at least one second compute kernel is delayed until the data transfer of the first portion is completed (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 49-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry). The data transfers are pipelined using a refill threshold to implement loading further instructions for subsequent sections of the user function. Figures 2A-D show a first transfer is completed prior to beginning a second transfer of instructions.).
As per claim 9:
Diamant disclosed the apparatus according to claim 8 wherein the machine-readable instructions comprise instructions to determine the first and second portions of the data transfer by performing static compiler analysis of the compute kernel (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 37-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 8 lines 53-59, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry). Software instructions in an embodiment are implemented to perform the refill step as part of executing the processing block in the GPU. Refill instructions are added to instruction code based on prefetching distance needed. Official notice is given that compilers can perform static analysis for the advantage of implementing program understanding/comprehension. Thus, it would have been obvious to one of ordinary skill in the art that the inserted software refill instructions are added by the compiler using static analysis.).
As per claim 10:
Diamant disclosed the apparatus according to claim 8, wherein the machine-readable instructions comprise instructions to emulate the execution of the compute kernel, and to determine the first and second portions based on memory accesses occurring during the emulation (Diamant: Figures 1 and 3 elements 160 and 310, column 2 lines 37-54, column 3 lines 6-13, column 8 lines 36-51, and column 9 lines 30-39)(The instruction loader receiving a user function that needs to be split into multiple sections (i.e. compute kernels) determines impending execution of the user function by the neural network processor. Official notice is given that user functions not supported by hardware can be emulated by software instructions for the advantage of reducing hardware circuitry costs. Thus, it would have been obvious to one of ordinary skill in the art to implement emulation in Diamant for execution functions not directly supported by hardware.).
As per claim 11:
Diamant disclosed the apparatus according to claim 8, wherein the machine-readable instructions comprise instructions to determine the first and second portions based on a monitoring of a prior execution of the respective compute kernel by the one or more computing devices (Diamant: Figures 1 and 2A-D elements 150, 220, and 230, column 2 lines 37-54, column 3 lines 6-20, column 6 lines 58-65, and column 7 line 31 to column 8 line 23)(The instruction buffer is monitored based on the distance between the head and tail pointer. When the available space in the instruction buffer is greater than a threshold, a refill of instructions is performed. Official notice is given that logic performed in hardware can also be performed by software instructions for the advantage of reduced costs. Thus, it would have been obvious to one of ordinary skill in the art to implement the monitoring via software instructions based on execution of instructions in the instruction buffer.).
As per claim 12:
Diamant disclosed the apparatus according to claim 8, wherein the machine-readable instructions comprise instructions to determine the first and second portions based on user-specified information on the use of the data (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 37-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 8 lines 53-59, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry). Software instructions in an embodiment are implemented to perform the refill step as part of executing the processing block in the GPU.).
As per claim 13:
Diamant disclosed the apparatus according to claim 8, wherein the machine-readable instructions comprise instructions to determine the first and second portions using heuristics regarding the location of the first portion within the data transfer (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 37-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 8 lines 53-59, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry). Software instructions in an embodiment are implemented to perform the refill step as part of executing the processing block in the GPU. Refill instructions are added to instruction code based on prefetching distance needed (i.e. heuristic).).
As per claim 14:
Diamant disclosed the apparatus according to claim 8, wherein the machine-readable instructions comprise instructions to pipeline the data transfer such that a first portion of the data transfer related to the second compute kernel is started before the data transfer of the second portion of the data transfer related to the first compute kernel is started (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 37-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 8 lines 53-59, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry). The initial section of a user function (i.e. first portion) is loaded to the instruction buffer prior to later sections of a user function (i.e. second portion).).
As per claim 15:
Diamant disclosed the apparatus according to claim 1, wherein the machine-readable instructions comprise instructions to pipeline the data transfer based on a data communication bandwidth available for transferring the two or more compute kernels (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 37-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 8 lines 53-59, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry). Software instructions in an embodiment are implemented to perform the refill step as part of executing the processing block in the GPU. Official notice is given that interconnects and buses have data bandwidth limits for the advantage of reducing hardware costs. Thus, it would have been obvious to one of ordinary skill in the art that the software instruction refills takes a number of transfer cycles based on the data bandwidth of the buses and interconnects from the instruction loader.).
As per claim 16:
Diamant disclosed the apparatus according to claim 1, wherein the machine-readable instructions comprise instructions to split an initial compute kernel to generate the two or more compute kernels (Diamant: Figures 1 and 3 elements 160 and 310, column 2 lines 37-54, column 3 lines 6-13, column 8 lines 36-51, and column 9 lines 30-39)(The instruction loader receiving a user function that needs to be split into multiple sections (i.e. compute kernels) determines impending execution of the user function by the neural network processor (i.e. computing device). Official notice is given that logic performed in hardware can also be performed by software instructions for the advantage of reduced costs. Thus, it would have been obvious to one of ordinary skill in the art to implement the instruction loader function via software instructions.).
As per claim 17:
Diamant disclosed the apparatus according to claim 1, wherein the machine-readable instructions comprise instructions to provide a runtime environment for executing the two or more compute kernels using the one or more computing devices, with the runtime environment performing the determination of the impending execution and the pipelining of the data transfer (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 49-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 8 lines 53-59, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect. The instruction loader refills the instruction buffer ahead of need (i.e. runtime environment). Alternatively, software instructions in an embodiment are implemented to perform the refill step (i.e. runtime environment) as part of executing the processing block in the GPU.).
As per claim 18:
Diamant disclosed the apparatus according to claim 1, wherein the machine-readable instructions comprise instructions to host a driver for accessing the one or more computing devices, with the driver performing the determination of the impending execution and the pipelining of the data transfer (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 49-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 8 lines 53-59, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect. Software instructions in an embodiment are implemented to perform the refill step (i.e. drivers) as part of executing the processing block in the GPU.).
As per claim 19:
Diamant disclosed the apparatus according to claim 1, wherein the machine-readable instructions comprise instructions to compile a computer program comprising the two or more compute kernels, with the compilation being based on the pipelining of the data transfer (Diamant: Figure 1 element 160, column 3 lines 6-20)(The instruction loader in an embodiment is a compiler that implements the instruction data transfer to the neural network processor.).
As per claim 20:
Diamant disclosed the apparatus according to claim 1, wherein the machine-readable instructions comprise instructions to use a hardware queuing mechanism of a computer system hosting the apparatus to pipeline the data transfer (Diamant: Figure 1 elements 150-160, column 3 lines 36-46)(A refill page (i.e. hardware queue) can be used in an embodiment to perform instruction transfers into the instruction buffer.).
As per claim 21:
Diamant disclosed the apparatus according to claim 1, wherein the machine-readable instructions comprise instructions to assign the two or more compute kernels to respective compute circuitry of the one or more computing devices, and to pipeline the data transfer of the two or more compute kernels to the respective computing device based on the assignment, with the assignment being performed based on at least one of a constraint with respect to a time to execution, a throughput constraint, a utilization constraint, a power consumption constraint, and a thermal constraint (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 49-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry). Refilling the instruction buffer ahead of need is based on a utilization constraint (i.e. size of the instruction buffer).).
As per claim 22:
Diamant disclosed the apparatus according to claim 1, wherein the machine-readable instructions comprise instructions to adapt at least one of a capability of at least one interconnect and a capability of at least one memory system being involved in the execution and/or data transfer based on the pipelined data transfer (Diamant: Figures 1, 2A-D, and 3 elements 114-116, 150, 160, 240, 260, 330, and 360, column 2 lines 49-65, column 4 lines 49-57, column 5 lines 1-8, column 6 lines 58-64, column 7 lines 31-37, column 8 lines 10-23, column 8 lines 53-59, column 10 lines 27-41, and column 10 lines 64-67)(The split sections of a user function are repeatedly loaded at different increments in time into the instruction buffer (i.e. pipeline a data transfer) via the interconnect (i.e. interface circuitry). Software instructions in an embodiment are implemented to perform the refill step. The software instructions are inserted based on a capability of the instruction buffer to store instructions for execution.).
As per claim 23:
Claim 23 essentially recites the same limitations of claim 1. Therefore, claim 23 is rejected for the same reasons as claim 1.
As per claim 24:
Claim 24 essentially recites the same limitations of claim 1. Therefore, claim 24 is rejected for the same reasons as claim 1.
Response to Arguments
The arguments presented by Applicant in the response received on 12/4/2025 are considered persuasive.
Applicant argues for claims 1 and 23-24:
“Diamant discloses a scheme for refilling the instruction buffer. Diamant discloses that to refill the instruction buffer while instructions in the instruction buffer are being executed by the execution engine, an entity may be used to determine whether the instruction buffer has space available for storing new instructions, and to load new instructions into the instruction buffer when the instruction buffer has space available. (Diamant, column 7 lines 1-6.) However, Diamant fails to disclose or teach temporally staggering input/output data transfers to different compute kernels. In claim 1, the input/output data transfer for two or more compute kernels are pipelined, i.e., the input/output data transfer for the first compute kernel is pipelined with the input/output data transfer for the second kernel. Diamant fails to disclose that a data transfer related to the execution of the two or more compute kernels is pipelined such that a transfer of input data and/or output data related to execution of one of the two or more compute kernels is temporally staggered with a transfer of input data and/or output data related to execution of another of the two or more compute kernels, respectively.
Diamant merely discloses a scheme for loading the instruction codes to keep an instruction buffer full. Diamant discloses the transfer of instruction codes with the execution of other instruction codes to prevent the instruction buffer from running empty. However, Diamant fails to disclose that transfers of input/output data for two or more compute kernels are temporally staggered. Diamant fails to disclose a scheme for staggering the input/output data transfers in relation to the execution of two or more compute kernels.”
This argument is found to be persuasive for the following reason. The examiner agrees that Diamant does not anticipate the newly added limitations. However, a new ground of rejection has been set forth above, as necessitated by the amendment.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The following is text cited from 37 CFR 1.111(c): In amending in reply to a rejection of claims in an application or patent under reexamination, the applicant or patent owner must clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. The applicant or patent owner must also show how the amendments avoid such references or objections.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB A. PETRANEK whose telephone number is (571)272-5988. The examiner can normally be reached on M-F 8:00-4:30.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jyoti Mehta can be reached on (571) 270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JACOB PETRANEK/Primary Examiner, Art Unit 2183