DETAILED ACTION
Claims 1-4, 7-11, 14-18 and 21-26 are pending in this application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 7-11, 14-18 and 21-25 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2011/0063313 A1 to Bolz et al. in view of U.S. Pat. No. 11,055,812 to Asthana.
As to claim 1, Bolz teaches one or more processors, comprising:
circuitry (Graphics 150) to, in response to an application programming interface call (a stream of graphics API operations/a stream of graphics application programming interface (API) commands), retrieve one or more flags (memoryBarrierNV( )) of a graphics processing unit (GPU), wherein: the one or more flags cause an indication to be read of whether one or more memory operations are to be performed (memoryBarrierNV( ) OpenGL Shading Language (GLSL) operation--along with the "MEMBAR" assembly operation, provides explicit synchronization that ensures a proper ordering of read and write operations) (“…To alleviate the aforementioned synchronization inaccuracies while avoiding the overhead of the automatic synchronization mechanism discussed above, we can provide explicit synchronization commands into the received graphics command streams. Explicit synchronization ensures that the effects of buffer and texture data stores performed by one or more shader programs to a portion of memory are visible to subsequent commands that access the same portion of memory. For example, a graphics command stream may include one or more memory operations that are completed in an undefined order. To provide a defined order of execution, the GPU 150 may perform an explicit synchronization at various points within the graphics command stream. This can be accomplished by configuring the GPU 150 to track the execution state of each of the commands in order to effectively determine whether all commands have completed in execution…The memoryBarrierNV( ) OpenGL Shading Language (GLSL) operation--along with the "MEMBAR" assembly operation, provides explicit synchronization that ensures a proper ordering of read and write operations within a shader thread. Memory operations scheduled for execution prior to the memory barrier command are all guaranteed to have completed to a point of coherence when the memory barrier command completes in execution. 
Further, the compiler does not re-order any load and store memory operations that are scheduled to execute subsequent to a memory barrier command, preventing any automatic optimizations from compromising the guaranteed point of coherence while permitting optimizations between barriers….The memory barrier command provides stronger ordering of read and write operations performed by a single thread. When a memory barrier command is executed, any memory operations issued by the thread prior to the memory barrier command are guaranteed to be completed before any subsequent memory operations are performed. Memory barrier commands are needed for algorithms that allow multiple threads to access the same memory location. For such algorithms, memory operations associated with that memory location need to be performed in a partially-defined relative order...” paragraphs 0038-0040/claim 2).
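The ordering guarantee Bolz attributes to the memory barrier command can be sketched as follows. This is an illustrative toy model only, not code from any cited reference; the function name and the use of sorting to stand in for compiler/hardware reordering are the editor's own assumptions.

```python
def schedule(stream):
    """Toy model of the ordering guarantee a memory barrier provides.

    stream: list of operation names, where the token "MEMBAR" marks a
    memory barrier. Operations between barriers may be freely reordered
    (modeled here by sorting, standing in for compiler or hardware
    optimization), but no operation ever moves across a barrier.
    """
    segments, current = [], []
    for op in stream:
        if op == "MEMBAR":
            segments.append(sorted(current))  # reorder within the segment
            current = []
        else:
            current.append(op)
    segments.append(sorted(current))  # final segment after the last barrier

    ordered = []
    for seg in segments:  # segments keep their relative order
        ordered.extend(seg)
    return ordered
```

For example, `schedule(["store_b", "store_a", "MEMBAR", "load_a"])` yields `["store_a", "store_b", "load_a"]`: the two stores may be reordered relative to each other, but both complete before any operation scheduled after the barrier.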
Bolz is silent with reference to the GPU context corresponding to a data structure that stores the one or more flags, the data structure associated with a subset of memory operations performed by a GPU, and
the one or more flags indicating whether one or more of the subset of memory operations are to be performed synchronously or asynchronously.
Asthana teaches the GPU context corresponds to a data structure (data structure) that stores the one or more flags (dependency/barrier/wait count), the data structure associated with a subset of memory operations (For example, the execution graph may be a Directed Acyclic Graph (DAG), with each node representing a command and each edge representing a dependency or a parent-child relationship between the two connected nodes/Read After Write, or RAW, dependency) performed by a GPU (GPU 530), and
the one or more flags indicate whether one or more of the subset of memory operations are to be performed synchronously or asynchronously (one or more command queues/Command Queues 519.sub.0-519.sub.N) (“…In some embodiments, the host CPU may then encode the actual commands that are to be launched on the GPU hardware. Next, the host CPU (or GPU firmware, in some implementations) may add the determined dependency information based on the above-described dependency analysis for each incoming command into a data structure and use the information in the data structure to construct and maintain an execution graph indicating an execution order of the commands. For example, the execution graph may be a Directed Acyclic Graph (DAG), with each node representing a command and each edge representing a dependency or a parent-child relationship between the two connected nodes. Next, in implementations wherein the GPU firmware is generating the execution graph, a background thread executing on the GPU's firmware may fetch commands from one or more command queues. The background execution thread may then fetch the encoded dependencies, along with actual command to launch on GPU. In implementations where the host CPU is generating the execution graph, the background execution thread on the GPU firmware may fetch only the actual commands to launch on GPU, e.g., in graph walk-order, from graph data structure of the execution graph. In some embodiments, the background execution thread on the GPU firmware may also perform additional pre-processing operations on the commands that are to be launched on GPU…According to some embodiments, each command in the execution graph may be associated with a wait count, where the wait count is indicative of the number of (e.g., zero or more) parent commands a particular (child) command depends on. 
Typically, the particular command can be executed on the GPU after execution of its parent commands has been completed (i.e., wait count=0) or if the particular command does not have any parents (e.g., is a root node where wait count is also zero)…As explained previously, commands fetched by firmware 520 from command queues 519.sub.0-519.sub.N may have various dependencies on each other. As a result, a particular execution order determined based on the dependency must be enforced while executing commands from command queues 519.sub.0-519.sub.N on GPU 530. One example of a dependency is when data generated by a first command (e.g., graphics or compute command or micro-command) is needed for processing a second command. This is also referred to herein as a Read After Write, or RAW, dependency. As such, GPU 530 may not be able to start execution of the second command until its prerequisite one or more (first) commands are completely processed. Lack of any dependency relationship between any two commands means both commands can be executed in parallel (or in any relative order, e.g., if the hardware is only capable of executing a single command at a time). Conversely, in order to enforce an ordering between two commands, associated dependency must be established. Commands of the same command queue may have dependencies, such that a child command of the queue is dependent upon execution of a parent command of the same queue. Commands belonging to different command queues may also have dependencies between each other…” Col. 5 Ln. 42-65, Col. 6 Ln. 1-9, Col. 12 Ln. 63-67, Col. 12 Ln. 1-18).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bolz with the teaching of Asthana because the teaching of Asthana would improve the system of Bolz by reducing the amount of time the GPU stays idle while waiting for the next command to execute, thereby reducing latency and increasing parallelism in the submission of graphics or computational commands.
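The wait-count scheduling Asthana describes (each command becomes runnable when the number of its unfinished parents reaches zero) can be sketched as follows. This is an illustrative Python model only; the function and variable names are the editor's own and do not appear in Asthana.

```python
from collections import deque

def run_commands(commands, deps):
    """Execute commands in dependency order using per-command wait counts.

    commands: list of command names, in queue order.
    deps: dict mapping a child command to the set of parent commands it
    depends on (e.g., a Read After Write dependency). A command is
    launched only when its wait count (unfinished parents) is zero.
    """
    wait_count = {c: len(deps.get(c, ())) for c in commands}
    children = {c: [] for c in commands}
    for child, parents in deps.items():
        for p in parents:
            children[p].append(child)

    ready = deque(c for c in commands if wait_count[c] == 0)  # root nodes
    order = []
    while ready:
        cmd = ready.popleft()
        order.append(cmd)
        for child in children[cmd]:
            wait_count[child] -= 1       # one parent has completed
            if wait_count[child] == 0:   # all parents done: child may launch
                ready.append(child)
    return order
```

With a RAW dependency, `run_commands(["write_buf", "read_buf"], {"read_buf": {"write_buf"}})` returns `["write_buf", "read_buf"]`; commands with no dependency relationship remain free to execute in any relative order.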
As to claim 2, Asthana teaches the one or more processors of claim 1, wherein the one or more flags include a context flag of a context (data structure) to be used by the circuitry to perform the subset of memory operations and the one or more other memory operations (For example, the execution graph may be a Directed Acyclic Graph (DAG), with each node representing a command and each edge representing a dependency or a parent-child relationship between the two connected nodes/Read After Write, or RAW, dependency) (see the passage of Asthana quoted in the rejection of claim 1 above; Col. 5 Ln. 42-65, Col. 6 Ln. 1-9, Col. 12 Ln. 63-67, Col. 12 Ln. 1-18).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bolz with the teaching of Asthana because the teaching of Asthana would improve the system of Bolz by reducing the amount of time the GPU stays idle while waiting for the next command to execute, thereby reducing latency and increasing parallelism in the submission of graphics or computational commands.
As to claim 3, Asthana teaches the one or more processors of claim 1, wherein the one or more memory operations are to be performed after the subset of other memory operations complete (For example, the execution graph may be a Directed Acyclic Graph (DAG), with each node representing a command and each edge representing a dependency or a parent-child relationship between the two connected nodes/Read After Write, or RAW, dependency) (see the passage of Asthana quoted in the rejection of claim 1 above; Col. 5 Ln. 42-65, Col. 6 Ln. 1-9, Col. 12 Ln. 63-67, Col. 12 Ln. 1-18).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bolz with the teaching of Asthana because the teaching of Asthana would improve the system of Bolz by reducing the amount of time the GPU stays idle while waiting for the next command to execute, thereby reducing latency and increasing parallelism in the submission of graphics or computational commands.
As to claim 4, Asthana teaches the one or more processors of claim 1, wherein the one or more flags are further to indicate a current context of a device to be used to perform the subset of memory operations and the one or more other operations (For example, the execution graph may be a Directed Acyclic Graph (DAG), with each node representing a command and each edge representing a dependency or a parent-child relationship between the two connected nodes/Read After Write, or RAW, dependency) (see the passage of Asthana quoted in the rejection of claim 1 above; Col. 5 Ln. 42-65, Col. 6 Ln. 1-9, Col. 12 Ln. 63-67, Col. 12 Ln. 1-18).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bolz with the teaching of Asthana because the teaching of Asthana would improve the system of Bolz by reducing the amount of time the GPU stays idle while waiting for the next command to execute, thereby reducing latency and increasing parallelism in the submission of graphics or computational commands.
As to claim 7, Bolz teaches the one or more processors of claim 1, wherein the API is to cause the circuitry to receive one or more input values (memory barrier command) indicating a storage location (same memory location) from which the one or more flags are to be read (“…The memory barrier command provides stronger ordering of read and write operations performed by a single thread. When a memory barrier command is executed, any memory operations issued by the thread prior to the memory barrier command are guaranteed to be completed before any subsequent memory operations are performed. Memory barrier commands are needed for algorithms that allow multiple threads to access the same memory location. For such algorithms, memory operations associated with that memory location need to be performed in a partially-defined relative order…” paragraph 0040).
As to claims 8 and 15, see the rejection of claim 1 above, except for the one or more processors and memory limitations.
Bolz teaches one or more processors (Central Processing Unit (CPU) 10) and memory (System memory 110).
As to claims 9 and 16, see the rejection of claim 2 above.
As to claims 10 and 17, see the rejection of claim 3 above.
As to claims 11 and 18, see the rejection of claim 4 above.
As to claim 14, see the rejection of claim 7 above.
As to claim 21, Asthana teaches the one or more processors of claim 1, wherein the circuitry is further to provide, in further response to the API call, the one or more flags to a device from which the API call was received (make calls to libraries, APIs) (“…In an embodiment, CPU 510 may, for example, be running a plurality of applications 510.sub.0-510.sub.N. Each of the plurality of applications, for example application 510.sub.0, may generate a plurality of commands (e.g., C.sub.00-C.sub.0N). In one embodiment, CPU 510 may issue instructions and make calls to libraries, APIs, and graphics subsystems to translate the high-level graphics instructions to graphics code (e.g., shader code) executable by GPU 530. The generated commands may be encoded and stored in priority-ordered command queues 519.sub.0-519.sub.N and communicated to firmware 520. In general, each application may have a set of priority-ordered command queues.” Col. 1 Ln. 41-60).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bolz with the teaching of Asthana because the teaching of Asthana would improve the system of Bolz by reducing the amount of time the GPU stays idle while waiting for the next command to execute, thereby reducing latency and increasing parallelism in the submission of graphics or computational commands.
As to claim 22, Asthana teaches the one or more processors of claim 1, wherein the GPU context further includes an additional one or more flags that indicate whether a kernel performing the subset of memory operations is to spin while waiting for results from the GPU (wait count) (“…According to some embodiments, each command in the execution graph may be associated with a wait count, where the wait count is indicative of the number of (e.g., zero or more) parent commands a particular (child) command depends on. Typically, the particular command can be executed on the GPU after execution of its parent commands has been completed (i.e., wait count=0) or if the particular command does not have any parents (e.g., is a root node where wait count is also zero)…” Col. 6 Ln. 1-9).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bolz with the teaching of Asthana because the teaching of Asthana would improve the system of Bolz by reducing the amount of time the GPU stays idle while waiting for the next command to execute, thereby reducing latency and increasing parallelism in the submission of graphics or computational commands.
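The distinction claim 22 draws (a flag indicating whether a kernel spins while waiting for GPU results, versus blocking) can be sketched as follows. This is an illustrative Python model only; the "GPU" is simulated with a timer thread, and all names are the editor's own.

```python
import threading

def wait_for_gpu(done_flag, spin=True):
    """Wait for a simulated GPU task, either spinning or blocking.

    done_flag: threading.Event set by the "GPU" when its task completes.
    spin: models a per-context flag selecting a busy-wait (the kernel
    spins, polling the flag) versus a blocking wait (the thread sleeps
    until signaled).
    """
    if spin:
        while not done_flag.is_set():  # busy-wait on the completion flag
            pass
    else:
        done_flag.wait()               # block until the GPU signals
    return "done"

def run(spin):
    done = threading.Event()
    gpu = threading.Timer(0.05, done.set)  # "GPU" completes after 50 ms
    gpu.start()
    result = wait_for_gpu(done, spin=spin)
    gpu.join()
    return result
```

Spinning trades CPU cycles for lower wake-up latency; blocking frees the core but incurs scheduler wake-up cost. Either path returns once the completion flag is set.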
As to claim 23, see the rejection of claim 21 above.
As to claim 24, see the rejection of claim 22 above.
As to claim 25, Bolz teaches the computer system of claim 15, wherein the computer system is further to provide, as a result of retrieval of the one or more flags, the one or more flags in further response to the API call (a stream of graphics API operations/a stream of graphics application programming interface (API) commands) (memoryBarrierNV( )) (see the rejection of claim 1 above).
Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2011/0063313 A1 to Bolz et al. in view of U.S. Pat. No. 11,055,812 to Asthana as applied to claims 1, 8 and 15 above, and further in view of U.S. Pub. No. 2004/0153602 A1 to Lovet et al.
As to claim 5, Bolz as modified by Asthana teaches the one or more processors of claim 1; however, it is silent with reference to wherein the one or more flags are further to indicate to the circuitry that at least one of the one or more memory operations is to be performed synchronously.
Lovet teaches wherein the one or more flags (flags) are further to indicate to the circuitry that at least one of the one or more memory operations is to be performed synchronously (synchronous) (“…That is, detection of asynchronous and synchronous memory access operations can be made without any externally supplied flags that instruct a memory device to expect either an asynchronous or synchronous memory access operation…” paragraph 0024).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bolz and Asthana with the teaching of Lovet because the teaching of Lovet would improve the system of Bolz and Asthana by providing a technique for performing computer instructions sequentially so that the instructions are executed in order.
As to claims 12 and 19, see the rejection of claim 5 above.
Claims 6, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2011/0063313 A1 to Bolz et al. in view of U.S. Pat. No. 11,055,812 to Asthana as applied to claims 1, 8 and 15 above, and further in view of U.S. Pub. No. 2020/0293367 A1 to Andrei et al.
As to claim 6, Bolz as modified by Asthana teaches the one or more processors of claim 1; however, it is silent with reference to wherein the one or more flags are further to indicate to the circuitry that at least one of the one or more memory operations is to be performed asynchronously.
Andrei teaches wherein the one or more flags are further to indicate to the circuitry that at least one of the one or more memory operations (Kernel #0/Kernel #1) is to be performed asynchronously (shared local memory) (“…As shown in FIG. 17B, during execution, the zeroth kernel 1701, first kernel 1702, and second kernel 1703 can perform operations on their respective input parameters and transfer data via the shared local memory. Kernel #1 can use the output data written by kernel #0 as at least a portion of the input used to perform its operations. Kernel #1 can then write at least a portion of its output to the shared local memory, which can be used by kernel #2. The output of kernel #2 can be written to device memory and may be subsequently copied to host memory. Kernel #0 and kernel #1 may also write at least a portion of their output to device memory. The amount of memory that is passed between kernels within shared local memory can vary based on the type of operations performed by the kernels and the size of the shared local memory. When the third kernel 1705 does not specify a dependency with kernel #0, kernel #1, and/or kernel #2 the engine block portion can then perform an operation 1704 to flush and/or clear the shared local memory before a third kernel 1705 (kernel #3) is executed…” paragraph 0225).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bolz and Asthana with the teaching of Andrei because the teaching of Andrei would improve the system of Bolz and Asthana by providing a method of inter-process communication (IPC) in which multiple processes can access the same region of computer memory.
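The kernel chaining Andrei describes (each dependent kernel consumes the previous kernel's output from shared local memory, with the buffer flushed before an independent kernel runs) can be sketched as follows. This is an illustrative Python model only; the function names and the `(kernel, depends_on_previous)` representation are the editor's own assumptions.

```python
def run_kernel_chain(kernels, first_input):
    """Toy model of chained kernels passing data via shared local memory.

    kernels: list of (kernel_fn, depends_on_previous) pairs. A dependent
    kernel reads the previous kernel's output from the shared buffer;
    before an independent kernel runs, the buffer is flushed/cleared.
    """
    shared = first_input  # shared local memory holding the current data
    for fn, depends in kernels:
        if not depends:
            shared = None     # no dependency declared: flush the buffer
        shared = fn(shared)   # kernel writes its output back to the buffer
    return shared
```

For example, chaining a doubling kernel into a dependent increment kernel on input 3 produces 7, while an independent kernel observes a flushed (empty) buffer rather than the previous kernel's output.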
As to claims 13 and 20, see the rejection of claim 6 above.
Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2011/0063313 A1 to Bolz et al. in view of U.S. Pat. No. 11,055,812 to Asthana as applied to claim 15 above, and further in view of Int. Pub. No. WO 2019/001077 A1 to Yang et al.
As to claim 26, Bolz as modified by Asthana teaches the computer system of claim 15; however, it is silent with reference to wherein the GPU context further includes an additional one or more flags that indicate whether a kernel performing the subset of memory operations is to block a central processing unit (CPU) thread while waiting for the GPU to complete a task.
Yang teaches wherein the GPU context further includes an additional one or more flags (GPU barriers) that indicate whether a kernel performing the subset of memory operations is to block a central processing unit (CPU) thread (suspend the CPU threads) while waiting for the GPU to complete a task (“…A method and apparatus for controlling the synchronization of central processing unit (CPU) threads and graphics processing unit (GPU) threads, and a computer-readable storage medium, wherein said method comprises: when image frames are rendered by a GPU, creating GPU barriers for GPU threads, the initial state of the GPU barriers being a closed state (S101); creating signal events for CPU threads, the initial state of the signal events being a no signal state (S102); binding the GPU barriers and the signal events together (S103); calling a preset function to suspend the CPU threads, and waiting for the signal events to be changed to a signaled state (S104); when the image frames have been completely rendered by the GPU, opening the GPU barriers, and setting the signal events to a signaled state, thus waking up the CPU threads by means of the preset functions (S105). The described method has the technical effect of synchronizing program logic on Direct3D 12 with program images…” Abstract).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bolz and Asthana with the teaching of Yang because the teaching of Yang would improve the system of Bolz and Asthana by providing a method and apparatus for controlling the synchronization of central processing unit (CPU) threads and graphics processing unit (GPU) threads, allowing for seamless computing (Yang Abstract).
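The barrier/event binding Yang describes (a GPU barrier bound to a signal event; the CPU thread suspends until the GPU opens the barrier and signals the event) can be sketched as follows. This is an illustrative Python model only; the class and method names are the editor's own and are not Yang's actual API.

```python
import threading

class FrameSync:
    """Sketch of binding a GPU barrier to a CPU-visible signal event.

    The CPU thread suspends until the event is signaled; when the GPU
    finishes rendering a frame it "opens the barrier" and sets the bound
    event, waking the CPU thread.
    """
    def __init__(self):
        self.barrier_open = False       # GPU barrier: initially closed
        self.event = threading.Event()  # signal event: initially unsignaled

    def gpu_frame_complete(self):
        self.barrier_open = True        # open the GPU barrier...
        self.event.set()                # ...and set the bound signal event

    def cpu_wait_for_frame(self, timeout=None):
        return self.event.wait(timeout)  # suspend the CPU thread

def demo():
    fs = FrameSync()
    gpu = threading.Timer(0.05, fs.gpu_frame_complete)  # frame done in 50 ms
    gpu.start()
    woke = fs.cpu_wait_for_frame(timeout=2.0)  # CPU suspends until signaled
    gpu.join()
    return woke, fs.barrier_open
```

Here `demo()` returns `(True, True)`: the CPU thread is woken by the signal event, and the barrier is open once the simulated frame has rendered.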
Response to Arguments
Applicant’s arguments with respect to claims 1-4, 7-11, 14-18 and 21-26 have been considered but are moot because the new ground of rejection relies on an additional reference not applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES E ANYA whose telephone number is (571)272-3757. The examiner can normally be reached Mon-Fri, 9am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KEVIN YOUNG can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLES E ANYA/Primary Examiner, Art Unit 2194