DETAILED ACTION
Response to Amendment
This communication is responsive to the amendment filed on 12/1/2025. Claims 1-32 are pending and have been examined. Claims 1-7, 9-12, 15, 17-20, 23, 25-28 and 31 have been amended.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 6-10, 14-18, 22-26 and 30-32 are rejected under 35 U.S.C. 103 as being unpatentable over Memon (PGPUB No. 2019/0114254) in view of Nickolls (USPAT No. 7,788,468), and further in view of Duluk (USPAT No. 7,526,634) (cited on the 892 filed on 8/3/2023).
In regards to claim 1, Memon discloses One or more processors ([0020 and Fig. 1]: wherein CPUs (element 210) are disclosed) comprising circuitry, in response to an application programming interface ("API") call ([0020-0025]: wherein APIs (element 305) are disclosed), to: receive an identifier of instructions of a first stream to be performed in parallel using one or more processing units ([0016, 0019, 0022 and 0030]: wherein a CUDA API receives parameters (identifiers) of instructions of a stream of execution of an application that offloads tasks to one or more GPUs (see [0020-0027 and 0072] for further details on CUDA API parallel programming and API calls)); determine whether the one or more processing units include computing resources available to perform the instructions of the first stream ([0022, 0028, 0030 and 0045]: wherein the CUDA API calls receive a response indicating whether the one or more GPUs include enough available memory to perform the instructions of the application tasks, and thus the CUDA API determines whether the one or more GPUs have enough available memory); assign an available portion of memory of the one or more processing units to the first stream to be used to synchronize the instructions ([0022, 0025, 0028 and 0045]: wherein the available portion of memory (5GB, for example) of a GPU is allocated (assigned) and used to synchronize instructions (see [0033] for more details on synchronizing)); and as a result of determining that the one or more processing units include at least the computing resources, use the portion of memory to perform the instructions of the first stream concurrently using the one or more processing units ([0022, 0025, 0028, 0045 and 0072]: wherein the portion of memory (5GB) is used to perform the instructions of the offloaded task and the instructions are performed in parallel using the one or more GPUs (see [0020-0025] for further details on CUDA API parallel programming)).
Memon does not explicitly disclose an application programming interface ("API") to: receive an identifier of a first set of two or more dependent instructions of a first stream to be performed concurrently with at least a second set of two or more dependent instructions of a second stream using one or more processing units; determine whether the one or more processing units include computing resources available to perform at least the first set of two or more dependent instructions of the first stream; synchronizing the first set of two or more dependent instructions; and as a result of determining that the one or more processing units include at least the computing resources, use memory to perform instructions concurrently using the one or more processing units. Memon discloses using CUDA APIs (i.e. parallel programming APIs) to run synchronized application tasks on one or more GPUs, based on determining available memory to run the tasks. However, Memon does not explicitly disclose a CUDA API performing sets of instructions concurrently based on available computing resources.
Nickolls discloses an application programming interface ("API") to: receive an identifier of a first set of two or more instructions to be performed concurrently with at least a second set of two or more instructions using one or more processing units (Column 25, lines 35-67 to Column 26, lines 1-58: wherein the API receives a reference (identifier) to a CTA (cooperative thread array) program to be performed using one or more cores of a GPU, wherein multiple cooperative thread arrays can be executed concurrently based on the API and each thread array includes multiple threads comprising multiple instructions; thus, the API receives a reference to a CTA program that includes a first set of two or more instructions (first thread group/array) to be performed concurrently with a second set of two or more instructions (second thread group/array)); determine whether the one or more processing units include computing resources available to perform at least the first set of two or more instructions (Column 19, lines 40-46, Column 26, lines 59-67 to Column 27, lines 1-17: wherein it is determined whether enough register file space, SIMD groups, etc. are available to perform a first set of two or more instructions (first thread group/array)); assign an available portion of memory of the one or more processing units (Column 13, lines 2-10: wherein register file space is allocated for each processing engine); and as a result of determining that the one or more processing units include at least the computing resources, cause the first set of two or more instructions to be performed concurrently with at least the second set of two or more instructions using the one or more processing units (Column 19, lines 40-46, Column 26, lines 59-67 to Column 27, lines 1-17: wherein, as a result of the one or more cores including at least enough register file space, SIMD groups, etc., the first thread array and second thread array can be executed concurrently using the one or more cores).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the CUDA API launched in Memon to launch cooperative thread arrays as the API of Nickolls does. One of ordinary skill in the art would see that Memon discloses using NVIDIA CUDA APIs, including API calls, to perform parallel computing on GPUs, while Nickolls, which is assigned to NVIDIA, discloses using an API to launch cooperative thread arrays for parallel computing on GPUs. Thus, it would have been obvious to one of ordinary skill in the art because it would have been the simple substitution of one known element (using an API to launch cooperative thread arrays for parallel computing as taught in Nickolls) for another (using a generic API for parallel computing as taught in Memon) to obtain predictable results (using an NVIDIA API to launch cooperative thread arrays for parallel computing) (MPEP 2143, Example B). Furthermore, cooperative thread arrays are advantageously employed to perform computations that lend themselves to a data parallel decomposition (Nickolls: Column 5, lines 15-45 and Column 7, lines 38-48).
The combination of Memon and Nickolls does not disclose a first set of two or more dependent instructions of a first stream to be performed concurrently with at least a second set of two or more dependent instructions of a second stream using one or more processing units nor synchronizing the first set of two or more dependent instructions. Nickolls discloses using an API to launch a plurality of cooperative thread arrays for parallel execution, thus Nickolls discloses using a GPU to perform a first thread group of instructions and second thread group of instructions in parallel. However, Nickolls does not disclose that the first and second thread groups include dependent instructions of first and second instruction streams which are synchronized.
Duluk discloses a first set of two or more dependent instructions of a first stream to be performed concurrently with at least a second set of two or more dependent instructions of a second stream using one or more processing units (Column 1, lines 55-67, Column 4, lines 61-66, Column 7, lines 54-67 to Column 8, lines 1-10: wherein a first set of dependent instructions (a first CTA of set 320 dependent upon a CTA of set 310) is performed concurrently with a second set of dependent instructions (a second CTA of set 320 dependent upon a CTA of set 310) using a multithreaded core array (See Column 16, lines 21-23 and 40-41: wherein it is disclosed that CTAs are launched using stream launches. Thus, the CTAs can be of a first and second stream launch)); and synchronizing the first set of two or more dependent instructions (Column 2, lines 1-11 and Column 8, lines 12-25: wherein synchronizing of a dependent CTA including instructions is disclosed).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the first and second thread arrays/groups of Memon and Nickolls to be dependent first and second thread arrays/groups as taught in Duluk. It would have been obvious to one of ordinary skill in the art because it would have been the simple substitution of one known element (executing dependent thread groups, including dependent instructions, in parallel as taught in Duluk) for another (executing generic thread groups, including generic instructions, in parallel as taught in Nickolls) to obtain predictable results (executing dependent cooperative thread arrays for parallel computing) (MPEP 2143, Example B). Furthermore, it would have been advantageous to synchronize CTAs on a GPU for streams of execution launched by a CPU so as to keep the GPU busy (Duluk: Column 2, lines 1-28).
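For illustration only, the claimed sequence as mapped above (receive the instruction sets of two streams, check available resources, assign a memory portion, and perform the dependent sets concurrently with a synchronization point) can be sketched abstractly in Python. No such code appears in Memon, Nickolls, or Duluk; every name and number below is a hypothetical stand-in for the claimed abstractions, not any reference's implementation.

```python
import threading

# Arbitrary stand-in for the processing units' available memory.
AVAILABLE_MEMORY = 8

def launch_streams(streams, required_memory):
    """Check resources, assign a memory budget per stream, then run each
    stream's chain of dependent steps concurrently with a sync point."""
    if required_memory * len(streams) > AVAILABLE_MEMORY:
        return None  # insufficient resources: the launch is refused
    results = {}
    barrier = threading.Barrier(len(streams))  # synchronization point
    def run(name, instructions):
        value = 0
        for step in instructions:  # dependent: each step consumes the last
            value = step(value)
        barrier.wait()             # the streams synchronize here
        results[name] = value
    threads = [threading.Thread(target=run, args=(n, ins))
               for n, ins in streams.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

streams = {
    "stream1": [lambda x: x + 1, lambda x: x * 2],  # dependent chain
    "stream2": [lambda x: x + 3, lambda x: x * 3],
}
print(launch_streams(streams, required_memory=2))  # both chains complete
print(launch_streams(streams, required_memory=5))  # refused: 10 > 8
```

The sketch only mirrors the ordering of the claimed steps; it says nothing about how any cited reference actually schedules work on a GPU.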
Claim 9 is similarly rejected on the same basis as claim 1 above as claim 9 is the method claim corresponding to the processor of claim 1 above.
Claim 17 is similarly rejected on the same basis as claim 1 above as claim 17 is the system claim corresponding to the processor of claim 1 above. (Claim 17 differs from claim 1 in that it states “A computer system comprising one or more processors and memory storing executable instructions that, as a result of being performed by the one or more processors”. However, this limitation is additionally taught by Memon (See Figs. 1 and corresponding disclosure))
Claim 25 is similarly rejected on the same basis as claim 1 above as claim 25 is the machine-readable medium claim corresponding to the processor of claim 1 above. (Claim 25 differs from claim 1 in that it states “A non-transitory machine-readable medium having stored thereon a set of instructions, which are performed by one or more processors”. However, the limitation is taught by Memon paragraphs [0015-0021 and 0044])
In regards to claim 2, the overall combination of Memon, Nickolls and Duluk discloses “The one or more processors of claim 1” (see rejection of claim 1 above) “wherein the API is a driver stored in memory of a computer system” (Memon [0015-0016 and 0024-0025] | Nickolls: Column 25, lines 35-50)
Claim 10 is similarly rejected on the same basis as claim 2 above as claim 10 is the method claim corresponding to the processor of claim 2 above.
Claim 18 is similarly rejected on the same basis as claim 2 above as claim 18 is the system claim corresponding to the processor of claim 2 above.
Claim 26 is similarly rejected on the same basis as claim 2 above as claim 26 is the machine-readable medium claim corresponding to the processor of claim 2 above.
In regards to claim 6, the overall combination of Memon, Nickolls and Duluk discloses “The one or more processors of claim 1” (see rejection of claim 1 above) “wherein the first two or more dependent instructions of the first stream interacts with one or more other sets two or more dependent instructions of the second stream” (Duluk: Column 7, lines 54-67 to Column 8, lines 1-10 and Column 9, lines 53-67: wherein a first CTA, including dependent threads of instructions, interacts or executes in parallel with one or more other CTAs, including dependent threads of instructions, of set 320 (See Fig. 3) (See Column 16, lines 21-23 and 40-41: wherein it is disclosed that CTAs are launched using stream launches. Thus, the CTAs can be of a first and second stream launch)) “by accessing shared memory, obtaining status of the one or more other two or more dependent instructions of the second stream, waiting for the one or more other sets of two or more dependent instructions of the second stream, or sending or receiving data from the one or more other sets of two or more dependent instructions of second stream.” (Duluk: Column 5, lines 57-67 and Column 6, lines 53-57: wherein dependent threads of instructions of a CTA share data with other CTAs of dependent threads using shared memory (See Column 16, lines 21-23 and 40-41: wherein it is disclosed that CTAs are launched using stream launches. Thus, the CTAs can be of a first and second stream launch))
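As an abstract illustration of the kind of interaction recited in claim 6 (one instruction set obtaining the status of, waiting on, and receiving data from another through shared memory), a minimal Python sketch follows. It is not drawn from Duluk or the CUDA references; the GPU shared-memory mechanism is only loosely analogized here by a shared dictionary and an event flag, and all names are hypothetical.

```python
import threading

shared = {}                # stands in for shared memory between the sets
ready = threading.Event()  # status flag published by the first set

def first_set():
    shared["partial"] = 21  # write an intermediate result to shared memory
    ready.set()             # publish the first set's status

def second_set(out):
    ready.wait()            # wait on the first set's status
    out.append(shared["partial"] * 2)  # receive and consume the shared data

out = []
t1 = threading.Thread(target=first_set)
t2 = threading.Thread(target=second_set, args=(out,))
t2.start()  # second set starts first, so it genuinely waits
t1.start()
t1.join()
t2.join()
print(out)  # [42]
```

The event/shared-dictionary pairing is a deliberate simplification; it shows only the wait/status/data-exchange pattern, not any reference's actual hardware mechanism.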
Claim 14 is similarly rejected on the same basis as claim 6 above as claim 14 is the method claim corresponding to the processor of claim 6 above.
Claim 22 is similarly rejected on the same basis as claim 6 above as claim 22 is the system claim corresponding to the processor of claim 6 above.
Claim 30 is similarly rejected on the same basis as claim 6 above as claim 30 is the machine-readable medium claim corresponding to the processor of claim 6 above.
In regards to claim 7, the overall combination of Memon, Nickolls and Duluk discloses “The one or more processors of claim 1” (see rejection of claim 1 above) “wherein the circuitry prevents the first set of two or more dependent instructions of the first stream from being performed as a result of determining that there are insufficient resources available to perform the first set of two or more dependent instructions of the first stream concurrently” (Memon [0028, 0030 and 0045]: wherein a request to execute tasks on the GPU is denied if insufficient memory is available on the GPU | Nickolls: Column 19, lines 40-46, Column 26, lines 59-67 to Column 27, lines 1-17: wherein it is determined whether enough register file space, SIMD groups, etc. are available to perform a first set of two or more instructions (first thread group/array) in a GPU core; if enough resources are not available, the first thread group/array is not executed at that time | Duluk: Column 5, lines 40-58, Column 8, lines 1-10, Column 11, lines 43-48 and Column 13, lines 4-8: wherein a first CTA of set 320, including two or more dependent threads of instructions, is prevented from being performed in parallel depending upon GPU resources; for example, CTAs of a set may be performed serially if insufficient GPU resources are available to perform the CTAs of the set in parallel (See Column 16, lines 21-23 and 40-41: wherein it is disclosed that CTAs are launched using stream launches. Thus, the CTAs can be of a first and second stream launch))
Claim 15 is similarly rejected on the same basis as claim 7 above as claim 15 is the method claim corresponding to the processor of claim 7 above.
Claim 23 is similarly rejected on the same basis as claim 7 above as claim 23 is the system claim corresponding to the processor of claim 7 above.
Claim 31 is similarly rejected on the same basis as claim 7 above as claim 31 is the machine-readable medium claim corresponding to the processor of claim 7 above.
In regards to claim 8, the overall combination of Memon, Nickolls and Duluk discloses “The one or more processors of claim 7” (see rejection of claim 7 above) “wherein the resources include one or more of a register file, a memory, a shared memory, or a processor core” (Memon [0028, 0030 and 0045]: wherein the resource is memory |Nickolls: Column 26, lines 59-67 to Column 27, lines 1-17: wherein the resources are register file space, SIMD groups, etc. |Duluk: Column 5, lines 50-58 and Column 6, lines 23-30: wherein resources include a processor core and a register file space)
Claim 16 is similarly rejected on the same basis as claim 8 above as claim 16 is the method claim corresponding to the processor of claim 8 above.
Claim 24 is similarly rejected on the same basis as claim 8 above as claim 24 is the system claim corresponding to the processor of claim 8 above.
Claim 32 is similarly rejected on the same basis as claim 8 above as claim 32 is the machine-readable medium claim corresponding to the processor of claim 8 above.
Claims 3-5, 11-13, 19-21 and 27-29 are rejected under 35 U.S.C. 103 as being unpatentable over Memon, Nickolls, Duluk and further in view of the NPL reference “CUDA C PROGRAMMING GUIDE”, hereinafter referred to as CUDA (cited on the 892 filed on 3/27/2024).
In regards to claim 3, the overall combination of Memon, Nickolls and Duluk discloses “The one or more processors of claim 1” (see rejection of claim 1 above) wherein the first set of two or more dependent instructions of the first stream and one or more other sets of two or more dependent instructions of the second stream (Duluk: Column 1, lines 55-67, Column 4, lines 61-66, Column 7, lines 54-67 to Column 8, lines 1-10: wherein a first set of dependent instructions (a first CTA of set 320 dependent upon a CTA of set 310) is performed concurrently with other sets of dependent instructions (another CTA of set 320 dependent upon a CTA of set 310) using a multithreaded core array (See Column 16, lines 21-23 and 40-41: wherein it is disclosed that CTAs are launched using stream launches. Thus, the CTAs can be of a first and second stream launch))
The combination of Memon, Nickolls and Duluk does not disclose the first set of two or more instructions and other set of two or more instructions are co-resident in memory of a graphics processing unit ("GPU"). Duluk does disclose a plurality of cooperative thread arrays interacting through shared memory. However, the reference does not explicitly state that the cooperative thread arrays are co-resident in memory of a GPU.
CUDA discloses wherein the first set of two or more instructions and the other sets of two or more instructions are co-resident in memory of a graphics processing unit ("GPU") (CUDA: pages 6, 9-12 and page 153: wherein cooperative thread blocks/groups are co-resident in memory of GPU (note: Nickolls and Duluk disclose the first and other sets of two or more instructions and therefore the combination of references discloses the above limitation))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the cooperative thread groups of Nickolls and Duluk to be co-resident in the memory of a GPU as taught in CUDA. It would have been obvious to one of ordinary skill in the art because allowing thread groups to be co-resident allows the GPU to efficiently utilize resources (i.e., by sharing resources) and memory access (i.e., with multiple blocks resident, the GPU can switch between them to hide the latency of memory operations: if one block is waiting for data from global memory, the SM can switch to another block that is ready to compute, thus maximizing throughput).
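The latency-hiding rationale above can be illustrated with a toy cycle-count model. The phase lengths and counts below are arbitrary and are not taken from the CUDA guide; the point is only that with a single resident block every memory wait leaves the SM idle, whereas with two co-resident blocks one block's compute phase can overlap the other's wait.

```python
# Toy model: each block alternates a compute phase (SM busy) and a
# memory-wait phase (SM idle unless another block can run).
COMPUTE, WAIT = 2, 2   # cycles per phase (arbitrary)
PHASES = 3             # compute/wait phase pairs per block (arbitrary)

def serial_cycles(n_blocks):
    # One block resident at a time: every wait is exposed idle time.
    return n_blocks * PHASES * (COMPUTE + WAIT)

def interleaved_cycles():
    # Two co-resident blocks: while one block waits on memory, the
    # scheduler runs the other block's compute phase, so every wait
    # overlaps compute except the final one.
    total_compute = 2 * PHASES * COMPUTE
    exposed_wait = WAIT  # only the last wait has no compute to hide it
    return total_compute + exposed_wait

print(serial_cycles(2), interleaved_cycles())  # 24 vs 14 under this model
```

Under this (deliberately simplified) model, co-residency cuts total cycles from 24 to 14, which is the throughput argument the motivation statement relies on.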
Claim 11 is similarly rejected on the same basis as claim 3 above as claim 11 is the method claim corresponding to the processor of claim 3 above.
Claim 19 is similarly rejected on the same basis as claim 3 above as claim 19 is the system claim corresponding to the processor of claim 3 above.
Claim 27 is similarly rejected on the same basis as claim 3 above as claim 27 is the machine-readable medium claim corresponding to the processor of claim 3 above.
In regards to claim 4, the overall combination of Memon, Nickolls and Duluk discloses “The one or more processors of claim 1” (see rejection of claim 1 above) wherein the first set of two or more dependent instructions of the first stream are part of a first group of two or more threads (Nickolls: Column 7, lines 23-67: wherein a first cooperative thread array is disclosed | Duluk: Column 7, lines 54-67 to Column 8, lines 1-10 and Column 9, lines 53-67: wherein the first two or more dependent instructions are part of a first CTA including two or more threads (See Column 16, lines 21-23 and 40-41: wherein it is disclosed that CTAs are launched using stream launches. Thus, the CTAs can be of a first and second stream launch))
The combination of Memon, Nickolls and Duluk does not disclose the circuitry is to cause the first group of two or more threads to be co-resident at a first point in time. Nickolls and Duluk disclose a first cooperative thread array including a first group of two or more threads. However, the references do not explicitly state the cooperative thread array is co-resident in memory of a GPU.
CUDA discloses the circuitry is to cause the first group of two or more threads to be co-resident at a first point in time (CUDA: pages 6, 9-12 and page 153: wherein cooperative thread blocks/groups, including a first thread block including two or more threads, are co-resident in memory of GPU at some (first) point in time)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the cooperative thread groups of Nickolls and Duluk to be co-resident in the memory of a GPU as taught in CUDA. It would have been obvious to one of ordinary skill in the art because allowing thread groups to be co-resident allows the GPU to efficiently utilize resources (i.e., by sharing resources) and memory access (i.e., with multiple blocks resident, the GPU can switch between them to hide the latency of memory operations: if one block is waiting for data from global memory, the SM can switch to another block that is ready to compute, thus maximizing throughput).
Claim 12 is similarly rejected on the same basis as claim 4 above as claim 12 is the method claim corresponding to the processor of claim 4 above.
In regards to claim 5, the overall combination of Memon, Nickolls and Duluk discloses “The one or more processors of claim 1” (see rejection of claim 1 above) wherein: one or more other sets of two or more dependent instructions of the second stream are part of a second group of threads (Duluk: Column 7, lines 54-67 to Column 8, lines 1-10 and Column 9, lines 53-67: wherein another set of two or more dependent instructions is part of another CTA including two or more threads (See Column 16, lines 21-23 and 40-41: wherein it is disclosed that CTAs are launched using stream launches. Thus, the CTAs can be of a first and second stream launch))
The combination of Memon, Nickolls and Duluk does not disclose a second group of co-resident threads; circuitry is to cause the second group of co-resident threads to be co-resident at a second point in time; and co-residency allows each thread in the second group of co-resident threads to interact with at least one other thread in the second group of co-resident threads.
CUDA discloses a second group of co-resident threads (CUDA: pages 6, 9-12 and 153: wherein a co-resident thread block is disclosed); circuitry is to cause the second group of co-resident threads to be co-resident at a second point in time (CUDA: pages 6, 9-12 and page 153: wherein cooperative thread blocks/groups, including a second thread block including two or more threads, are co-resident in memory of the GPU at some (second) point in time); and co-residency allows each thread in the second group of co-resident threads to interact with at least one other thread in the second group of co-resident threads (CUDA: pages 8-12, 153 and 157-158: wherein co-residency allows each thread in the second thread block to interact (operate in parallel or access shared memory) with at least one other thread in the second block)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the cooperative thread groups of Nickolls and Duluk to be co-resident in the memory of a GPU as taught in CUDA. It would have been obvious to one of ordinary skill in the art because allowing thread groups to be co-resident allows the GPU to efficiently utilize resources (i.e., by sharing resources) and memory access (i.e., with multiple blocks resident, the GPU can switch between them to hide the latency of memory operations: if one block is waiting for data from global memory, the SM can switch to another block that is ready to compute, thus maximizing throughput).
Claim 13 is similarly rejected on the same basis as claim 5 above as claim 13 is the method claim corresponding to the processor of claim 5 above.
Claim 20 is similarly rejected on the same basis as claims 4 and 5 as claim 20 is the system claim corresponding to the processor including limitations of claims 4 and 5.
Claim 21 is similarly rejected on the same basis as claim 5 above as claim 21 is the system claim corresponding to the processor of claim 5 above.
Claim 28 is similarly rejected on the same basis as claims 4 and 5 as claim 28 is the machine-readable medium claim corresponding to the processor including limitations of claims 4 and 5.
Claim 29 is similarly rejected on the same basis as claim 5 as claim 29 is the machine-readable medium claim corresponding to the processor including limitations of claim 5.
Examiner Notes
The examiner suggests applicant amend the method claims to remove contingent limitations so as to avoid any contingent limitation interpretations in future Office actions (See MPEP 2111.04(II); see also Ex parte Schulhauser, Appeal 2013-007847 (PTAB April 28, 2016)). Thus, the examiner suggests amending the method of claim 9 to state, for example, “…in response to the API call, receiving an identifier…” and “determining that…”.
Response to Arguments
Applicant's arguments filed on 12/1/2025 have been fully considered but they are not persuasive. Therefore, the rejections of the independent claims made in view of Memon, Nickolls and Duluk have been maintained.
Claims 2-8, 10-16, 18-24 and 26-32 are argued at least based upon their respective dependencies from claims 1, 9, 17 and 25 and therefore remain rejected at least based on their respective dependencies from claims 1, 9, 17 and 25.
Applicant argues the 103 rejections on page 11 of the remarks filed on 12/1/2025, in substance, that:
“At best, the cited portions of Memon disclose "[t]he standard response to the call will include an indication of whether the chosen coprocessor is able to meet the requirements, in particular, for memory ... if the chosen coprocessor has insufficient memory, then either the request is denied, or some slower memory technology with enough space is called into use." Memon at [0045]. However, Applicant respectfully submits that neither one of Memon, Nickols, and/or Duluk, alone or combined, appear to teach or disclose "assign[ing] an available portion of memory of the one or more processing units to the first stream to be used to synchronize the first set of two or more dependent instructions; and as a result of determining that the one or more processing units include at least the computing resources, us[ing] the portion of memory to perform the first set of two or more dependent instructions of the first stream concurrently with at least the second set of two or more dependent instructions of the second stream using the one or more processing units" as recited in amended claim 1.
Therefore, Applicant respectfully submits that amended claim 1 is allowable under 35 U.S.C. § 103 over Memon in view Nickols and/or Duluk. Withdrawal of the pending rejection of amended claim 1 under 35 U.S.C. § 103 is, therefore, respectfully requested.”
The examiner respectfully disagrees with the above assertions because the combination of references would disclose the amended claim limitations. The examiner first directs applicant to Memon paragraph [0022], which discloses APIs synchronizing streams of execution between coprocessors and a host system, and allocating memory. In addition, Memon paragraphs [0028 and 0045] disclose that in cases of insufficient memory, slower memory technology can be used in combination with the portion of memory that is available on the coprocessor. For example, a portion of memory (e.g. 5GB) of a coprocessor can be allocated (assigned) in combination with the use of other, slower memories. Thus, Memon discloses assigning (allocating) a portion of available memory of one or more coprocessors to process a first stream of execution to be synchronized. Further, this portion of memory would be used to perform parallel GPU processing.
Furthermore, the first and second sets of two or more dependent instructions of streams are taught by Duluk; additionally, Duluk discloses synchronizing a first set of dependent instructions. For example, Duluk Column 16, lines 15-67 discloses a stream including multiple CTAs or a single CTA; thus, a first and second stream would each include a CTA comprising two or more dependent instructions (Note: the CTAs include dependencies (See Column 7, lines 63-67 to Column 8, lines 1-10)). Furthermore, Duluk's abstract and Column 8, lines 11-24 disclose synchronization of sets of CTAs including dependent instructions and therefore disclose synchronizing a first set of two or more dependent instructions.
Therefore, the combination of Memon, Nickolls and Duluk discloses the amended claim limitations.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to COURTNEY P SPANN whose telephone number is (571)431-0692. The examiner can normally be reached M-F, 9am-6pm, EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jyoti Mehta can be reached at 571-270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/COURTNEY P SPANN/ Primary Examiner, Art Unit 2183