DETAILED ACTION
Claims 1-10 and 12-25 are amended. Claims 1-25 are pending in the application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Examiner’s Notes
The Examiner cites particular sections in the references as applied to the claims below for the convenience of the applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant(s) fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/02/2026 has been entered.
Response to Amendment
Amendments to paragraphs [0082], [0094], [0097], [0109], [0110], [0116], [0193], [0255], [0284], [0311], [0365], and [0369] are fully considered and are satisfactory to overcome the objections directed to the specification in the previous Office Action.
Amendments to claims 1 and 7 are fully considered and are satisfactory to overcome the rejections under 35 U.S.C. §112(b) directed to claims 1-12 in the previous Office Action.
Amendments to claims 1, 7, 13, and 19 are fully considered and are satisfactory to overcome the rejections under 35 U.S.C. §101 directed to claims 1-25 in the previous Office Action.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-25 are rejected under 35 U.S.C. 103 as being unpatentable over Lutz et al. (“Helium: A Transparent Inter-kernel Optimizer for OpenCL”; from IDS filed on 08/26/2022; hereinafter Lutz) in view of Manion et al. (US 2004/0111469 A1; hereinafter Manion).
With respect to claim 1, Lutz teaches: One or more processors, comprising:
one or more circuits (see e.g. Lutz, page 77, column 2, paragraph 6: “The machine used for the test has an Intel Core i7-4770K CPU with 16GB of RAM and an Nvidia GeForce GTX 780 GPU connected via PCI-E 3.0. We use the OpenCL 1.1 implementation included in Nvidia’s Linux driver 331.79”) to:
receiving an application programming interface (API) call (see e.g. Lutz, page 70, column 2, paragraph 4: “Intercepting all OpenCL API library calls”; and page 72, Fig. 2: “OpenCL API”, “The calls to OpenCL functions from a target application are intercepted”) to access a stream of operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application”; page 71, column 1, paragraph 7: “Complex applications often define computation as a stream of data through a set of independent operators, creating a modular and maintainable code base” and Fig. 1) while the stream is in capture mode (see e.g. Lutz, page 70, column 2, paragraph 4: “Intercepting all OpenCL API library calls”; and page 72, Fig. 2: “calls to OpenCL functions from a target application are intercepted”); and
in response to the API call, return… a graph (see e.g. Lutz, page 70, column 2, paragraph 4: “Intercepting all OpenCL API library calls, our system is able to build a dynamic task and data dependency graph of the OpenCL application”; page 72, Fig. 2: “calls to OpenCL functions from a target application are intercepted to gather profile information. This information is then analyzed and combined to build a task graph”; and page 73, column 2, paragraph 4: “The dependency analyzer uses the runtime information gathered from intercepting the OpenCL API function calls to build an abstract representation of the program and its execution flow. The result is a task graph of inter-dependent OpenCL commands”) that is capturing the stream (see e.g. Lutz, page 70, column 2, paragraph 4: “Intercepting all OpenCL API library calls, our system is able to build a dynamic task and data dependency graph”; page 72, Fig. 2: “calls to OpenCL functions from a target application are intercepted to gather profile information. This information is then analyzed and combined to build a task graph”; and page 73, column 2, paragraph 4: “The dependency analyzer uses the runtime information gathered from intercepting the OpenCL API function calls to build an abstract representation of the program and its execution flow. The result is a task graph”).
Lutz discloses intercepting (capturing) OpenCL API calls from an application to access a stream of operations provided by the OpenCL API (e.g. enqueueWriteBuffer, enqueueNDRangeKernel, enqueueReadBuffer, etc., as shown in Fig. 1) and generating a task graph based on the intercepted API calls (i.e. returning a task graph as an output in response to the captured API calls).
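Purely for illustration of the mechanism the Examiner relies on (this code does not appear in Lutz; all class and function names are hypothetical), the interception of enqueue-style calls on a stream in capture mode, and the construction of a data-dependency task graph from those calls, can be sketched as:

```python
# Hypothetical sketch: intercept API calls on a stream in capture mode and
# record them as nodes of a task graph, inferring data dependencies by
# tracking buffer handles (as Helium does with OpenCL memory handles).

class TaskGraph:
    """Dependency graph of intercepted commands."""
    def __init__(self):
        self.nodes = []          # (node_id, api_name)
        self.edges = []          # (producer_id, consumer_id) data dependencies
        self._last_writer = {}   # buffer handle -> node that last wrote it

    def add_command(self, api_name, reads=(), writes=()):
        node_id = len(self.nodes)
        self.nodes.append((node_id, api_name))
        for buf in reads:        # reader depends on the last writer
            if buf in self._last_writer:
                self.edges.append((self._last_writer[buf], node_id))
        for buf in writes:
            self._last_writer[buf] = node_id
        return node_id


class CapturingStream:
    """Stream whose enqueued operations are intercepted into a graph."""
    def __init__(self):
        self.capturing = False
        self.graph = None

    def begin_capture(self):
        self.capturing = True
        self.graph = TaskGraph()

    def enqueue(self, api_name, reads=(), writes=()):
        if self.capturing:       # intercept instead of executing immediately
            return self.graph.add_command(api_name, reads, writes)
        raise RuntimeError("stream is not in capture mode")

    def get_capture_info(self):
        # Return the graph that is capturing the stream (an object
        # reference rather than a copy of the graph's contents).
        return self.graph


stream = CapturingStream()
stream.begin_capture()
stream.enqueue("enqueueWriteBuffer", writes=["b1"])
stream.enqueue("enqueueNDRangeKernel", reads=["b1"], writes=["b2"])
stream.enqueue("enqueueReadBuffer", reads=["b2"])
graph = stream.get_capture_info()
print(len(graph.nodes), graph.edges)   # prints: 3 [(0, 1), (1, 2)]
```

The edge list reflects the write-then-read chain on buffers b1 and b2, mirroring how Helium infers data and temporal dependencies from tracked handles.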
However, Lutz does not explicitly disclose returning “a location” of this task graph.
On the other hand, Manion teaches:
a location of (see e.g. Manion, paragraph 50: “Once a node has created or opened a graph, it receives a graph handle”; and paragraph 61: “a call to the peer graph create interface creates an entirely new graph… This call results in a graph handle being allocated… The output of this API is set to the handle for the graph”).
Lutz and Manion are analogous art because they are in the same field of endeavor: generating and maintaining graph information. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lutz with the teachings of Manion. The motivation/suggestion would be to improve the graph creation and management process.
With respect to claim 2, Lutz as modified teaches: The one or more processors of claim 1,
Lutz does not but Manion teaches:
wherein to return the location further comprises obtaining a handle corresponding to the graph (see e.g. Manion, paragraph 50: “Once a node has created or opened a graph, it receives a graph handle”; and paragraph 61: “a call to the peer graph create interface creates an entirely new graph… This call results in a graph handle being allocated… The output of this API is set to the handle for the graph”).
Lutz and Manion are analogous art because they are in the same field of endeavor: generating and maintaining graph information. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lutz with the teachings of Manion. The motivation/suggestion would be to improve the graph creation and management process.
With respect to claim 3, Lutz as modified teaches: The one or more processors of claim 1, wherein the one or more circuits are further to modify the graph (see e.g. Lutz, page 74, column 1, paragraph 2: “adding data and temporal dependencies”),
Lutz does not but Manion teaches:
based at least in part, on the location (see e.g. Manion, paragraph 50: “Once a node has created or opened a graph, it receives a graph handle. This handle is used in most of the following graphing APIs”; and paragraphs 70-74).
Lutz and Manion are analogous art because they are in the same field of endeavor: generating and maintaining graph information. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lutz with the teachings of Manion. The motivation/suggestion would be to improve the graph creation and management process.
With respect to claim 4, Lutz as modified teaches: The one or more processors of claim 1, wherein the graph is being generated using stream capture (see e.g. Lutz, page 70, column 2, paragraph 4: “Intercepting all OpenCL API library calls, our system is able to build a dynamic task and data dependency graph”; page 72, Fig. 2: “calls to OpenCL functions from a target application are intercepted to gather profile information. This information is then analyzed and combined to build a task graph”; and page 73, column 2, paragraph 4: “The dependency analyzer uses the runtime information gathered from intercepting the OpenCL API function calls to build an abstract representation of the program and its execution flow. The result is a task graph”).
With respect to claim 5, Lutz as modified teaches: The one or more processors of claim 1, wherein the API call is to a runtime API (see e.g. Lutz, page 73, column 2, paragraph 4: “uses the runtime information gathered from intercepting the OpenCL API function calls to build an abstract representation of the program and its execution flow”).
With respect to claim 6, Lutz as modified teaches: The one or more processors of claim 1, wherein the graph is to indicate one or more operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application are intercepted to gather profile information. This information is then analyzed and combined to build a task graph”; and page 71, Fig. 1: “non-blocking operations”, “Blocking operations”), corresponding to the stream of operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application”; page 71, column 1, paragraph 7: “Complex applications often define computation as a stream of data through a set of independent operators, creating a modular and maintainable code base” and Fig. 1), executable by one or more graphics processing units (GPUs) (see e.g. Lutz, page 77, column 2, paragraph 6: “The machine used for the test has an Intel Core i7-4770K CPU with 16GB of RAM and an Nvidia GeForce GTX 780 GPU… Since Helium’s backend relies on Nvidia’s open source PTX backend, we only evaluated the benchmarks on the GPU”).
With respect to claim 7: Claim 7 is directed to a system comprising one or more computers having one or more processors implementing functions corresponding to the functions implemented by the processor disclosed in claim 1; please see the rejection directed to claim 1 above, which also covers the limitations recited in claim 7.
With respect to claim 8, Lutz as modified teaches: The system of claim 7, wherein the one or more processors are further to execute one or more operations indicated by the graph (see e.g. Lutz, page 72, Fig. 2: “build a task graph, which is optimized before being executed by the vendor implementation”; and page 71, Fig. 1: “non-blocking operations”, “Blocking operations”).
Lutz does not but Manion teaches:
based, at least in part, on the location (see e.g. Manion, paragraph 50: “Once a node has created or opened a graph, it receives a graph handle. This handle is used in most of the following graphing APIs”; and paragraphs 70-74).
Lutz and Manion are analogous art because they are in the same field of endeavor: generating and maintaining graph information. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lutz with the teachings of Manion. The motivation/suggestion would be to improve the graph creation and management process.
With respect to claim 9, Lutz as modified teaches: The system of claim 7, wherein the one or more processors are further to obtain a reference to a set of nodes associated with the graph (see e.g. Lutz, page 73, column 2, Fig. 3: “Each node is an OpenCL command and an edge represents a dependency”).
With respect to claim 10, Lutz as modified teaches: The system of claim 7, wherein the graph is to indicate one or more operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application are intercepted to gather profile information. This information is then analyzed and combined to build a task graph”; and page 71, Fig. 1: “non-blocking operations”, “Blocking operations”), corresponding to the stream of operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application”; page 71, column 1, paragraph 7: “Complex applications often define computation as a stream of data through a set of independent operators, creating a modular and maintainable code base” and Fig. 1), and dependencies among the one or more operations (see e.g. Lutz, page 70, column 2, paragraph 4: “Intercepting all OpenCL API library calls, our system is able to build a dynamic task and data dependency graph”; and page 74, column 1, paragraph 2: “By tracking OpenCL handles, and in particular handles on allocated device memory, Helium can infer relations between actions by adding data and temporal dependencies”).
With respect to claim 11, Lutz as modified teaches: The system of claim 7, wherein the location is identified by a reference to the location (see e.g. Lutz, page 73, column 2, paragraph 2: “handles represent pointers”; and column 1, paragraph 2: “tracking OpenCL handles, and in particular handles on allocated device memory”).
With respect to claim 12, Lutz as modified teaches: The system of claim 7, wherein the graph is to encode a set of operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application are intercepted to gather profile information. This information is then analyzed and combined to build a task graph”; and page 71, Fig. 1: “non-blocking operations”, “Blocking operations”), corresponding to the stream of operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application”; page 71, column 1, paragraph 7: “Complex applications often define computation as a stream of data through a set of independent operators, creating a modular and maintainable code base” and Fig. 1), executable by one or more general purpose graphics processing units (GPGPUs) (see e.g. Lutz, page 77, column 2, paragraph 6: “The machine used for the test has an Intel Core i7-4770K CPU with 16GB of RAM and an Nvidia GeForce GTX 780 GPU… Since Helium’s backend relies on Nvidia’s open source PTX backend, we only evaluated the benchmarks on the GPU”).
With respect to claim 13: Claim 13 is directed to a non-transitory machine-readable medium having stored thereon a set of instructions to implement functions corresponding to the functions implemented by the processor disclosed in claim 1; please see the rejection directed to claim 1 above, which also covers the limitations recited in claim 13. Note that Lutz also discloses executing instructions on a machine to implement functions (see e.g. Lutz, page 77, column 1, paragraph 6) corresponding to the functions of the processor disclosed in claim 1.
With respect to claim 14, Lutz as modified teaches: The non-transitory machine-readable medium of claim 13, wherein the set of instructions further include instructions which, if performed by the one or more processors, cause the one or more processors to:
obtain the graph based, at least in part, on obtaining a handle corresponding to the graph (see e.g. Lutz, page 74, column 1, paragraph 2: “By tracking OpenCL handles, and in particular handles on allocated device memory, Helium can infer relations between actions by adding data and temporal dependencies”); and
perform one or more operations, corresponding to the stream of operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application are intercepted to gather profile information. This information is then analyzed and combined to build a task graph, which is optimized before being executed by the vendor implementation”; and page 71, Fig. 1: “non-blocking operations”, “Blocking operations”), based, at least in part, on the graph (see e.g. Lutz, page 70, column 2, paragraph 4: “Intercepting all OpenCL API library calls, our system is able to build a dynamic task and data dependency graph”; and page 74, column 1, paragraph 2: “By tracking OpenCL handles, and in particular handles on allocated device memory, Helium can infer relations between actions by adding data and temporal dependencies”).
With respect to claim 15, Lutz as modified teaches: The non-transitory machine-readable medium of claim 13, wherein the graph is to encode one or more kernels (see e.g. Lutz, page 73, column 1: “Kernel Objects”, “Kernel Invocations”; and page 74, column 1, paragraph 1: “When a kernel is invoked, its parameter list is copied to the invocation object, along with its relationships to other objects. By tracking OpenCL handles, and in particular handles on allocated device memory, Helium can infer relations between actions by adding data and temporal dependencies”).
With respect to claim 16, Lutz as modified teaches: The non-transitory machine-readable medium of claim 13, wherein the graph encodes operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application are intercepted to gather profile information. This information is then analyzed and combined to build a task graph”; and page 71, Fig. 1: “non-blocking operations”, “Blocking operations”), corresponding to the stream of operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application”; page 71, column 1, paragraph 7: “Complex applications often define computation as a stream of data through a set of independent operators, creating a modular and maintainable code base” and Fig. 1), executable by the one or more parallel processing units (PPUs) (see e.g. Lutz, page 77, column 2, paragraph 6: “The machine used for the test has an Intel Core i7-4770K CPU with 16GB of RAM and an Nvidia GeForce GTX 780 GPU… Since Helium’s backend relies on Nvidia’s open source PTX backend, we only evaluated the benchmarks on the GPU”; page 75, column 2, paragraph 2: “Task Reordering and Parallelization”; and page 78, column 1, paragraph 2: “Helium is able to introduce task parallelism from the baseline using its parallelizing scheduler”).
With respect to claim 17, Lutz as modified teaches: The non-transitory machine-readable medium of claim 13, wherein the API call is to a driver API (see e.g. Lutz, page 77, column 2, paragraph 6: “We use the OpenCL 1.1 implementation included in Nvidia’s Linux driver 331.79”).
With respect to claim 18, Lutz as modified teaches: The non-transitory machine-readable medium of claim 13, wherein the set of instructions further include instructions which, if performed by the one or more processors, cause the one or more processors to encode a set of operations in the graph (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application are intercepted to gather profile information. This information is then analyzed and combined to build a task graph”; and page 71, Fig. 1: “non-blocking operations”, “Blocking operations”)…, the set of operations corresponding to the stream of operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application”; page 71, column 1, paragraph 7: “Complex applications often define computation as a stream of data through a set of independent operators, creating a modular and maintainable code base” and Fig. 1).
Lutz does not but Manion teaches:
based, at least in part, on the location (see e.g. Manion, paragraph 50: “Once a node has created or opened a graph, it receives a graph handle. This handle is used in most of the following graphing APIs”; and paragraphs 70-74).
Lutz and Manion are analogous art because they are in the same field of endeavor: generating and maintaining graph information. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lutz with the teachings of Manion. The motivation/suggestion would be to improve the graph creation and management process.
With respect to claim 19: Claim 19 is directed to a method corresponding to the functions implemented by the processor disclosed in claim 1; please see the rejection directed to claim 1 above, which also covers the limitations recited in claim 19.
With respect to claim 20, Lutz as modified teaches: The method of claim 19, further comprising obtaining a status indication based, at least in part, on an API (see e.g. Lutz, page 74, column 2, paragraph 8: “Horizontal Fusion. When several nodes are at the same depth in the task graph, or more generally when there is no path between two nodes, these nodes are data independent. The absence of a path indicates that their relative ordering does not matter, or they can even be executed at the same time”; and page 75, column 1, paragraph 2: “Vertical Fusion. A path between two nodes indicates a data dependency: the first node produces data, which is then consumed by the second, indicating a temporal relationship between the nodes”),
Lutz does not but Manion teaches:
in response to the API call identifying the location of the one or more portions of the graph code (see e.g. Manion, paragraph 50: “Once a node has created or opened a graph, it receives a graph handle. This handle is used in most of the following graphing APIs”; and paragraphs 68-77).
Lutz and Manion are analogous art because they are in the same field of endeavor: generating and maintaining graph information. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lutz with the teachings of Manion. The motivation/suggestion would be to improve the graph creation and management process.
With respect to claim 21, Lutz as modified teaches: The method of claim 19, wherein the graph includes one or more nodes corresponding to one or more operations (see e.g. Lutz, page 73, column 1, Fig. 3: “Each node is an OpenCL command”), the one or more operations corresponding to the stream of operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application are intercepted to gather profile information. This information is then analyzed and combined to build a task graph”; page 71, column 1, paragraph 7: “Complex applications often define computation as a stream of data through a set of independent operators, creating a modular and maintainable code base” and Fig. 1).
With respect to claim 22, Lutz as modified teaches: The method of claim 19, further comprising obtaining a value indicating a size of a set of nodes associated with the graph (see e.g. Lutz, page 74, column 2, paragraph 8: “When several nodes are at the same depth in the task graph, or more generally when there is no path between two nodes, these nodes are data independent… Helium groups nodes for which the input and output sets are disjoint. In the task graph presented in Figure 3, kernels A and B are data independent since their input and output set do not overlap: ({b1} ∪ {b1}) ∩ ({b2} ∪ {b3}) = ∅”).
With respect to claim 23, Lutz as modified teaches: The method of claim 19, further comprising adding one or more nodes to the graph (see e.g. Lutz, page 70, column 2, paragraph 4: “Intercepting all OpenCL API library calls, our system is able to build a dynamic task and data dependency graph”; and page 74, column 1, paragraph 2: “By tracking OpenCL handles, and in particular handles on allocated device memory, Helium can infer relations between actions by adding data and temporal dependencies”)
Lutz does not but Manion teaches:
based, at least in part, on the location (see e.g. Manion, paragraph 50: “Once a node has created or opened a graph, it receives a graph handle. This handle is used in most of the following graphing APIs”; and paragraphs 68-77).
Lutz and Manion are analogous art because they are in the same field of endeavor: generating and maintaining graph information. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lutz with the teachings of Manion. The motivation/suggestion would be to improve the graph creation and management process.
With respect to claim 24, Lutz as modified teaches: The method of claim 19, wherein the graph is to encode operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application are intercepted to gather profile information. This information is then analyzed and combined to build a task graph”; and page 71, Fig. 1: “non-blocking operations”, “Blocking operations”), corresponding to the stream of operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application”; page 71, column 1, paragraph 7: “Complex applications often define computation as a stream of data through a set of independent operators, creating a modular and maintainable code base” and Fig. 1), executable by the one or more central processing units (CPUs) (see e.g. Lutz, page 77, column 2, paragraph 6: “The machine used for the test has an Intel Core i7-4770K CPU”).
With respect to claim 25, Lutz as modified teaches: The method of claim 19, wherein the graph is to indicate one or more operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application are intercepted to gather profile information. This information is then analyzed and combined to build a task graph”; and page 71, Fig. 1: “non-blocking operations”, “Blocking operations”), corresponding to the stream of operations (see e.g. Lutz, page 72, Fig. 2: “calls to OpenCL functions from a target application”; page 71, column 1, paragraph 7: “Complex applications often define computation as a stream of data through a set of independent operators, creating a modular and maintainable code base” and Fig. 1), executable by one or more graphics processing units (GPUs) (see e.g. Lutz, page 77, column 2, paragraph 6: “The machine used for the test has an Intel Core i7-4770K CPU with 16GB of RAM and an Nvidia GeForce GTX 780 GPU… Since Helium’s backend relies on Nvidia’s open source PTX backend, we only evaluated the benchmarks on the GPU”).
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 7, 13, and 19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Specifically, even though Lutz discloses generating a task graph by capturing a stream of API calls directed to invoking operations provided by the OpenCL API (i.e. outputting a graph in response to the API calls), Lutz does not explicitly disclose returning the graph location as an output.
However, Manion discloses returning a graph handle (i.e. a graph locator) in response to a corresponding API call. As such, Lutz in view of Manion teaches the limitations recited in claims 1, 7, 13, and 19. For more details, please see the corresponding rejections above.
CONCLUSION
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Schmidt et al. (US 2009/0150431 A1) discloses providing and utilizing a set of references to relationship graphs associated with business objects (see paragraph 105).
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Umut Onat whose telephone number is (571)270-1735. The examiner can normally be reached M-Th 9:00-7:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kevin L Young can be reached on (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/UMUT ONAT/Primary Examiner, Art Unit 2194