DETAILED ACTION
This office action is in response to an Amendment/Request for Reconsideration-After Non-Final Rejection filed 11/4/2025 for application 18/638,394 filed 4/17/2024.
Claims 1-6 and 8-20 have been amended. No claims have been cancelled. Claims 21-22 are new. Thus, claims 1-22 have been examined.
The objections and rejections from the prior correspondence that are not restated herein are withdrawn.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Allowable Subject Matter
Claims 2 and 4-6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 2, the claim adds the limitation ‘and the streaming engine circuitry is structured to buffer the array of data at the first, second, third, and fourth buffer addresses’ in light of its dependence on claim 1, which recites that the first, second, third, and fourth buffer addresses contain the output of the computation. See claim 1, which recites ‘cause the streaming engine to copy the array of data from first, second, third, and fourth memory addresses of memory circuitry to a buffer; perform first operations using data in the buffer, the first operation to produce a first value at a first buffer address, a second value at a second buffer address, a third value at a third buffer address, and a fourth value at fourth buffer address…’
Thus the claim further requires that the first, second, third, and fourth memory address data that is copied to “a buffer” in claim 1 (which, in claim 1, may be copied to any area of the buffer) be buffered specifically at the first, second, third, and fourth buffer addresses where the computed values are placed.
This limitation, when combined with the limitations of claim 1 from which it depends, requires:
memory circuit containing an array of data containing four values
copying the memory circuit data to a buffer memory
performing an operation on the four values within the buffer memory, overwriting the buffer memory values with updated values
writing the updated values of the buffer memory so that two of the values are transposed, for example, if an original 4 by 4 array contains:
a b c d
e f g h
i j k l
m n o p
The system will write to the original memory circuit addresses:
a e i m
b f j n
c g k o
d h l p
The first address may be the address of the value “a”; the fourth address may be any other address on the diagonal, such as the address of “p”. The second address may be any address off the diagonal, such as the address of “b”, and the third address may be the address it is swapped with, such as the address of “e”.
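For illustration only, the following sketch (hypothetical addresses and a placeholder scale-by-two computation, not drawn from the claims or from any cited reference) models the buffer/compute/write-back-transposed sequence required by the limitations listed above:

def process_and_transpose(memory, addr_1, addr_2, addr_3, addr_4):
    # Copy the four values from the memory addresses into a buffer,
    # each value held at its own buffer address (index).
    buffer = [memory[addr_1], memory[addr_2], memory[addr_3], memory[addr_4]]
    # Perform the first operations in place, overwriting each buffer
    # address with its updated (computed) value.
    for i in range(len(buffer)):
        buffer[i] = buffer[i] * 2  # placeholder computation
    # Write back so that the two off-diagonal values are transposed:
    # first -> first address, third -> second address,
    # second -> third address, fourth -> fourth address.
    memory[addr_1] = buffer[0]
    memory[addr_2] = buffer[2]
    memory[addr_3] = buffer[1]
    memory[addr_4] = buffer[3]

memory = {0: 10, 1: 20, 4: 30, 15: 40}   # hypothetical addresses of "a", "b", "e", "p"
process_and_transpose(memory, 0, 1, 4, 15)
print(memory)  # {0: 20, 1: 60, 4: 40, 15: 80}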
The concept of reading two values into an intermediate buffer and writing out the swapped values is known. For example, Gayman (Gayman et al., US 2017/0286293) reads two values to be swapped into an interim buffer and writes the swapped data back to the original locations in order to perform wear leveling. However, as noted in paragraph [0036] of Gayman, only the values to be swapped to a new address are read into an interim buffer. Additional data, such as the claimed first and fourth data, is not read into the interim buffer and subsequently written back out to its original locations. Thus Gayman does not teach or suggest performing operations on the four values in the buffer and ‘write the first value to the first memory address, the third value to the second memory address, the second value to the third memory address, and the fourth value to the fourth memory address in the memory circuitry’ within the context of the claims.
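For context, a plain interim-buffer swap of the kind relied on for wear leveling may be sketched as follows (a hypothetical two-address example, not code from Gayman); note that only the two exchanged values are buffered and no computation is performed on other data:

def swap_via_interim_buffer(memory, addr_x, addr_y):
    # Only the two values being exchanged are read into the interim buffer;
    # no other data is buffered, computed on, or rewritten.
    interim = (memory[addr_x], memory[addr_y])
    memory[addr_x] = interim[1]
    memory[addr_y] = interim[0]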
You (YOU et al., US 2022/0383935) teaches a Second Swap Circuit that reads address entries from a table, swaps two addresses, and writes the swapped address values back to the table to perform row hammer mitigation. However, similar to Gayman, You does not read data that is not to be swapped into a temporary buffer, perform operations on the data in the temporary buffer, and write the data (including data whose address locations are not swapped) back to the address table.
Wang (Wang et al., US 2009/0016450 A1) teaches transcoding an array by a matrix transcoding method that swaps its rows with its columns, which is a well-known process. However, Wang does not teach or suggest performing operations on the four values in the buffer and ‘write the first value to the first memory address, the third value to the second memory address, the second value to the third memory address, and the fourth value to the fourth memory address in the memory circuitry’ within the context of the claims, where the first, second, and third values are computed based on earlier provided values.
Raut (Raut, US 2022/0121506 A1) teaches transposing data where data at the first address and the last address remains at the same location and intermediate address data is transposed (such as data at address 2 being swapped with data at address 5). However, Raut immediately writes the data to facilitate operations such as an FFT calculation and does not perform the calculation before writing transposed data that has been produced using an FFT operation, for example the FFT operation of Usui (Usui, US 2017/0262410 A1).
Guerrero (US 2006/0190517 A1) teaches reading and writing a transposed matrix and performing calculations on the data. Guerrero suggests the transpose and calculations may be performed in any order. However, Guerrero does not disclose copying data from one memory into an intermediate buffer and writing calculated values back to the initial addresses from the intermediate buffer. See Guerrero [0029], which discloses that the output of the processing operations (the computations) may be written to a file that may be stored or streamed, and Guerrero [0039]-[0041], which disclose that transcoding is an example of a processing operation that operates on a row-by-row basis. A row-by-row transcoding process would overwrite the column data in rows not yet transcoded, and thus would not properly transcode the data unless it was transcoding the data to a separate memory target.
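To illustrate this point, a hypothetical 3x3 example (not drawn from Guerrero) shows how writing each row/column-swapped row directly back over the source array corrupts the rows that have not yet been processed, unless the output goes to a separate memory target:

a = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
for r in range(3):
    a[r] = [a[c][r] for c in range(3)]  # row r overwritten with column r
print(a)  # [[1, 4, 7], [4, 5, 8], [7, 8, 9]] -- not the true transpose,
          # because rows 1 and 2 read values that were already overwritten.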
Li (Li et al., US 2020/0409664 A1), Fig. 4 and paras [0074]-[0094], teaches reading an array of data at Buffer Memory address 0 into a Systolic Array 430, performing an operation of the Systolic Array to produce a Results Buffer, and writing a matrix transcoding of the results buffer to address N. Thus Li teaches the steps of reading data into a buffer, operating on the buffer, and writing the transcoding of the data to memory. However, the data is not written to the original addresses (the first memory address, the second memory address, etc.) from which the data was read. Thus Li does not explicitly teach ‘write the first value to the first memory address, the third value to the second memory address, the second value to the third memory address, and the fourth value to the fourth memory address in the memory circuitry’ within the context of the claims.
Tan (Tan et al., US 2024/0111528 A1) teaches a memory circuit containing an array of data containing four values, copying the memory circuit data to a buffer memory, performing an operation, writing the updated values so that the values are transposed, and writing the results back to the original array of data location. However, Tan does not teach overwriting the buffer memory values with updated values. Instead, Tan first transposes the memory and places the results in a separate memory area (the PxP cell array 310 shown in Fig. 3), and then updates the transposed values before writing these results back, via the Results Buffer Memory 112 shown in Fig. 1, to the original memory circuit addresses. Thus Tan requires extra copies of the data.
Regarding claim 4, the prior art does not teach or suggest wherein the memory circuitry is first memory circuitry, the first, second, third, and fourth buffer addresses of the array of data correspond to the first memory location, the apparatus further comprising: second memory circuitry structured to store the array of data at a memory location; (Examiner notes the first to fourth addresses store the content of the computed values from the “array of data” per claim 1. This limitation adds a second memory circuitry to store the said array of data at a memory location, which thus may be a copy of the said array of data.) … transfer the array of data from the second memory circuitry to the first memory circuitry (Thus the system transfers a copy of the said array of data containing computed values per claim 1 to the first memory circuitry that contains the results of the computation on the said array of data received in claim 1.)
An updated search failed to identify a teaching or suggestion of the above claim limitations that transfer a copy of the said array of data of claim 1 to the first memory circuitry.
Claims 5 and 6 are objected to but would be allowable based on their dependence from claim 4, which contains allowable subject matter.
Claims 8-20 are allowed.
The following is a statement of reasons for the indication of allowable subject matter.
Regarding claim 8, An apparatus comprising: memory circuitry structured to store an array of data; …the streaming engine circuitry structured to buffer data from the memory circuitry at first, second, third, and fourth buffer addresses; (thus data from the memory circuitry is stored at first, second, third, and fourth buffer addresses)… perform first operations using data in the buffer, the first operations to produce a first value at a first buffer address, a second value at a second buffer address, a third value at a third buffer address, and a fourth value at fourth buffer address; (thus the first to fourth buffer addresses hold both the initial data read in from the memory circuitry and the results of the computation). Claim 8 is allowable for the reasons cited for claim 2; specifically, the prior art does not teach:
memory circuit containing an array of data containing four values
copying the memory circuit data to a buffer memory
performing an operation on the four values within the buffer memory, overwriting the buffer memory values with updated values
writing the updated values of the buffer memory so that two of the values are transposed.
Dependent claims 9-14 are allowed based on their dependence from allowed claim 8.
Regarding claim 15, the prior art does not teach or suggest ‘cause streaming engine circuitry to buffer at first, second, third, and fourth buffer addresses, data of an array of data in a buffer, the data of the array of data from first, second, third, and fourth memory addresses in memory circuitry
perform first calculations using the data in the buffer, the first calculations to produce a first value at the first buffer address, a second value at the second buffer address, a third value at the third buffer address, and a fourth value at the fourth buffer address;
and write the first value to the first memory address, the third value to the second memory address, the second value to the third memory address, and the fourth value to the fourth memory address in the memory circuitry’
Claim 15 is allowable for the reasons cited for claim 2; specifically, the prior art does not teach:
memory circuit containing an array of data containing four values
copying the memory circuit data to a buffer memory
performing an operation on the four values within the buffer memory, overwriting the buffer memory values with updated values
writing the updated values of the buffer memory so that two of the values are transposed.
Claims 16-20 are allowed based on their dependence from allowed claim 15.
Additionally, regarding claims 13 and 19, the prior art does not teach or suggest ‘cause the streaming engine circuitry to buffer the second portion of the array of data from the second memory addresses in the memory circuitry responsive to writing first, second, third, and fourth values to the first memory addresses’ within the context of the claims and the independent claims from which they depend. As detailed in paragraphs [0020] and [0021] of the instant application, this step requires that loading a buffer with a second portion of the array of data to be transposed be caused by, or performed in response to, a first buffer being transposed as detailed in claim 8 and written back to its original memory addresses. Thus buffering a second set of data from a second portion of data is gated by, or done in response to, a first set of data from a first portion of data being transposed.
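For illustration only, the gating relationship described above may be sketched as follows (hypothetical names and a placeholder computation; not a mapping of any claim language): the second portion is buffered into the same buffer addresses only after the first portion's computed values have been written back.

def process_portion(memory, addresses, buffer):
    buffer[:] = [memory[a] for a in addresses]   # buffer this portion
    buffer[:] = [v * 2 for v in buffer]          # placeholder first operations
    memory[addresses[0]] = buffer[0]             # write back transposed:
    memory[addresses[1]] = buffer[2]             # the two middle values
    memory[addresses[2]] = buffer[1]             # exchange addresses
    memory[addresses[3]] = buffer[3]

memory = list(range(8))
buffer = [0, 0, 0, 0]
process_portion(memory, [0, 1, 2, 3], buffer)    # first portion
# Only after (responsive to) the first portion's values being written back
# is the second portion buffered into the same buffer addresses:
process_portion(memory, [4, 5, 6, 7], buffer)    # second portion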
An updated search failed to identify a teaching or suggestion, alone or in combination, of the above limitations.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over
Tan (Tan et al., US 2024/0111528 A1) in view of St.Michael (an article titled Executing Commands in Memory: DRAM Commands by Stephen St. Michael published August 9, 2019.)
Regarding claim 1, Tan teaches An apparatus comprising: (Tan Fig. 13 shows a Host System 1300 that contains an Acceleration Engine 1312 and is an example of an apparatus. Tan Fig. 2 and para [0029] disclose a neural network accelerator 200 that may be a single chip package) memory circuitry structured to store an array of data; (Tan Fig. 13 shows Processor Memory 1304, and Tan Fig. 2 and para [0030] disclose a plurality of memories, including State Buffer Memory 204, PxP cell Array 310, and Results Buffer Memory 112, where the combination of these memories forms the memory circuitry. See also paras [0020]-[0021], [0026], [0029]-[0041], and [0093], which disclose that the system receives data from the Processor Memory 1304, stores the data in the State Buffer Memory, sends the data to transpose circuitry (i.e., Txpose Circuit 219a) to transpose rows and columns, sends the transposed data to a compute engine to create results in Results Buffer Memory 112, and writes the transposed/computed data back to the corresponding buffer memory.)
streaming engine circuitry coupled to the memory circuitry; (Consistent with para [0017] of the instant application, the streaming engine circuitry may be circuitry that controls communication between the programmable circuitry, the memory attached to the buffer, and a memory buffer associated with the transfer. Tan Figs. 1 and 3 and paras [0029] and [0089] disclose a host running a neural network accelerator 200 that may be implemented in software running on an integrated circuit, and thus is an example of streaming engine circuitry.)
and programmable circuitry coupled to the memory circuitry and the streaming engine circuitry, the programmable circuitry configured to at least one of execute or instantiate machine-readable instructions to at least: cause the streaming engine circuitry to (Tan [0021], [0029], [0089], and [0154] disclose the techniques may be implemented on a host running modules embodied on a non-transitory computer readable medium and processed by a processor.) copy the array of data from first, second, third and fourth memory addresses of the memory circuitry to a buffer (Tan [0093] discloses the system may begin the acceleration by copying data from Processor Memory 1304 into the Acceleration Engine. Tan Figs. 1 & 2 and [0023]-[0037] disclose the inputted data is placed in the State Buffer Memory 104. Tan [0019] discloses the system operates on data stored in a 2-dimensional tensor stored in a buffer memory and the transpose operation may place the row elements into the column elements; thus there are at least the same number of column elements as row elements in the state buffer memory 104. Tan [0023] discloses there may be 128 row elements. Thus there may be a first (element at row 0, column 0 address), second (element at row 0, column 1 address), third (element at row 1, column 0 address), and fourth (element at row 127, column 127 address).)
perform first operations using data in the buffer, the first operations to produce a first value at a first buffer address, a second value at a second buffer address, a third value at a third buffer address, and a fourth value at fourth buffer address; (Tan Fig. 2 and para [0025] disclose the compute channels may perform computations on the data elements. Tan [0024] discloses that the multiplication results (i.e., computation results) can be written back to the State Buffer Memory 104, thus producing a first value at a first buffer location, a second value at a second buffer location, etc.)
and write the first value to the first memory address, the third value to the second memory address, the second value to the third memory address, and the fourth value to the fourth memory address in the memory circuitry. (Tan Figs. 2 & 3 and [0029]-[0041], notably paras [0025]-[0026], disclose the compute channels may perform computations on the data elements. For example, a compute channel may scale each data element being streamed into the compute channel. Following the computation, the computed values are written back to the corresponding row partition of the state buffer memory 104 (to the first to fourth memory locations). Tan will perform the computation on the upper-left and lower-right elements, and they will be written back to their original positions after the transpose that swaps rows and columns. And Tan will place the computed value of the original second element (from row 0, column 1) at row 1, column 0, and will place the computed value of the original third element (from row 1, column 0) at row 0, column 1. Thus Tan will place the third value at the second memory address and the second value at the third memory address. See the diagram below, which shows the effect of transposing the data, computing a value on the data, and writing the computed and transposed data back to the same memory.)
[media_image1.png: diagram showing the effect of transposing the data, computing values on the data, and writing the computed and transposed data back to the same memory]
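As a worked illustration of the effect shown in the diagram (a hypothetical 2x2 example using NumPy, not reproduced from Tan), scaling each element and then transposing before write-back leaves the diagonal elements at their original addresses while the off-diagonal elements exchange addresses:

import numpy as np

data = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
scaled = data * 0.5       # placeholder per-element computation
written_back = scaled.T   # transpose swaps rows and columns before write-back
print(written_back)
# [[0.5 1.5]
#  [1.  2. ]]
# The computed first (0.5) and fourth (2.0) values remain at positions (0,0)
# and (1,1); the computed second and third values exchange positions.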
Tan teaches accessing an array of memory elements that may be a DRAM that is accessed by addresses. However, Tan does not disclose how each element is addressed. Thus Tan does not explicitly teach … to at least: cause the streaming engine circuitry to copy the array of data from first, second, third and fourth memory addresses of the memory circuitry to a buffer; perform first operations using data in the buffer, the first operations to produce a first value at a first buffer address, a second value at a second buffer address, a third value at a third buffer address, and a fourth value at fourth buffer address; and write the first value to the first memory address, the third value to the second memory address, the second value to the third memory address, and the fourth value to the fourth memory address in the memory circuitry.
St.Michael, of a similar field of endeavor, further teaches to at least: cause the streaming engine circuitry to copy the array of data from first, second, third and fourth memory addresses of the memory circuitry to a buffer; perform first operations using data in the buffer, the first operations to produce a first value at a first buffer address, a second value at a second buffer address, a third value at a third buffer address, and a fourth value at fourth buffer address; and write the first value to the first memory address, the third value to the second memory address, the second value to the third memory address, and the fourth value to the fourth memory address in the memory circuitry. (St.Michael, page 5, lines 1-30, teaches that DRAM memory elements are addressed according to their row and column address combination. Thus each element, such as the first (element at row 0, column 0 address), second (element at row 0, column 1 address), third (element at row 1, column 0 address), and fourth (element at row 127, column 127 address), is accessed based on its row & column address combination, and the first operations produce a first value at a first buffer address, a second value at a second buffer address, etc.)
Tan and St.Michael are in a similar field of endeavor as both are directed to accessing data stored in row/column arrays. Thus it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate memory addressing using a row & column address as taught by St.Michael into the solution of Tan that accesses memory stored in a 2-dimensional memory that represents rows and columns, thus combining prior art elements according to known methods to yield predictable results (accessing row/column data using standard DRAM access techniques that access the contents of 2-dimensional memory arrays using a row address and a column address).
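For illustration only, the row/column addressing relied on above may be sketched generically as follows (a row-major linearization with a hypothetical 128-column width; not specific to St.Michael or to any particular DRAM device):

NUM_COLUMNS = 128  # hypothetical array width

def linear_address(row, col, num_columns=NUM_COLUMNS):
    # A DRAM-style element is selected by its (row, column) pair; for a
    # row-major array the pair maps to a single linear address.
    return row * num_columns + col

first = linear_address(0, 0)       # element at row 0, column 0
second = linear_address(0, 1)      # element at row 0, column 1
third = linear_address(1, 0)       # element at row 1, column 0
fourth = linear_address(127, 127)  # element at row 127, column 127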
Regarding claim 3, Tan and St.Michael teach all of the limitations of claim 1 above.
Tan in view of St.Michael further teaches wherein the array of data is a first portion of the array of data, the first, second, third, and fourth memory addresses of the first portion of the array of data correspond to a first memory location, (Tan Fig. 2 and paras [0029]-[0037] show Row Group 204a that contains P row partitions sent to a first set of P compute channels 221a, which is an example of a first portion of the array of data that may have 4 memory entries (i.e., 4 memory addresses) in the solution of Tan in view of St.Michael.)
the array of data further having a second portion at a second memory location in the memory circuitry, (Tan Fig. 2 and paras [0029]-[0037] show Row Group 204b that contains P row partitions sent to a second set of P compute channels 221b, which is an example of a second portion of the array of data at a second memory location in the memory circuitry.)
and the programmable circuitry is further configured to: (Tan [0089])
copy the second portion of the array of data from the second memory location in the memory circuitry to the first, second, third, and fourth buffer addresses of the buffer for processing the second portion of the array of data and writing first, second, third, and fourth values to the second memory location in the memory circuitry and writing the transpose of the first portion of the array of data to the first memory location; (This limitation is simply a repeat of claim 1 applied to a second group of data in the source memory array. Tan Figs. 2 and 3 and paras [0029]-[0041] disclose the system may obtain the P row partitions (for a second Row Group 204b), transpose the data at a second Transpose Circuit 219b, perform computations using a dedicated set of P compute channels for Txpose Circuit 219b, and write the results to the original source rows.)
Regarding claim 21, Tan and St.Michael teach all of the limitations of claim 1 above. Tan further teaches wherein the programmable circuitry is further configured to: cause the streaming engine circuitry to (Tan [0021], [0029], [0089], and [0154] disclose the techniques may be implemented on a host running modules embodied on a non-transitory computer readable medium and processed by a processor.)
copy the first, second, third, and fourth values from the first, second, third, and fourth memory addresses in the memory circuitry to the first, second, third, and fourth buffer addresses in the buffer; (Tan [0093] discloses the system may begin the acceleration by copying data from Processor Memory 1304 into the Acceleration Engine. Tan Figs. 1 & 2 and [0023]-[0037] disclose the inputted data is placed in the State Buffer Memory 104. Tan [0019] discloses the system operates on data stored in a 2-dimensional tensor stored in a buffer memory and the transpose operation may place the row elements into the column elements; thus there are at least the same number of column elements as row elements in the state buffer memory 104. Tan [0023] discloses there may be 128 row elements. Thus there may be a first (element at row 0, column 0 address), second (element at row 0, column 1 address), third (element at row 1, column 0 address), and fourth (element at row 127, column 127 address).)
and perform second operations using data in the buffer. (Tan [0023]-[0037] disclose the system may transpose the data in the buffer, thus performing second operations.)
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Tan (Tan et al., US 2024/0111528 A1) in view of St.Michael (an article titled Executing Commands in Memory: DRAM Commands by Stephen St. Michael published August 9, 2019) as detailed in claim 1 above, and further in view of Blixt (Stefan Blixt, US 2026/0010492 A1).
Regarding claim 7, Tan and St.Michael teach all of the limitations of claim 1 above. Tan further teaches the memory circuitry is first memory circuitry, (Tan Fig. 1 shows State Buffer Memory 104) and the apparatus further comprising: second memory circuitry coupled to the first memory circuitry; (Tan Figs. 1 and 3 and para [0038] show the transpose circuit may contain a P x P cell array 310 that contains the transpose of the data from the memory partitions. Tan [0130] discloses various components communicate over a chip interconnect, which is an example of data router circuitry that connects the two memory circuits.)
However, the combination does not explicitly teach wherein the array of data is radar data, and analog front-end circuitry coupled to the second memory circuitry, the analog front-end circuitry configured to generate the radar data.
Blixt, of a similar field of endeavor, further discloses wherein the array of data is radar data, (Blixt [0101]-[0102] teaches an accelerator that transposes memory and performs calculations on the memory and may be used for radar applications) … and analog front-end circuitry coupled to the second memory circuitry, the analog front-end circuitry configured to generate the radar data. (Blixt [0275] discloses that the input to the accelerator can be a set of A/D converters (analog to digital converters) of a radar RF unit, where the A/D converters are an example of analog front-end circuitry coupled to the accelerator system and the front-end circuitry generates the digital radar data.)
Tan, St.Michael, and Blixt are all in a similar field as all relate to managing data stored in a row/column matrix. Thus it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the matrix transpose operation and scaling computation of the solution of Tan and St.Michael into the solution of Blixt that transposes radar data that needs to be scaled (see Blixt [0246] and [0284]-[0285]), thus combining prior art elements according to known methods (performing the transposing and scaling of radar data as taught by Blixt using the solution of Tan and St.Michael) to yield predictable results (offloading the transposing and scaling of radar data from a main processor onto an accelerator, thereby reducing the load on the main processor, which no longer has to perform these resource-intensive operations).
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over
Tan (Tan et al., US 2024/0111528 A1) in view of St.Michael (an article titled Executing Commands in Memory: DRAM Commands by Stephen St. Michael published August 9, 2019.) as detailed in claim 21 above and further in view of Searcy (SEARCY et al., US 2016/0033631 A1).
Regarding claim 22, Tan and St.Michael teach all of the limitations of claim 21 above.
However, the combination does not explicitly teach wherein the array is radar data, the first operations are a range fast Fourier transform (FFT), and the second operations are a Doppler FFT.
Searcy, of a similar field of endeavor, further teaches wherein the array is radar data, the first operations are a range fast Fourier transform (FFT), and the second operations are a Doppler FFT. (Searcy [0002]-[0003] disclose the solution is directed to radar systems using a Fast Chirp Waveform that generates arrays of data produced by a Range Fast Fourier Transform (FFT), which is followed by a Doppler FFT. Tan [0032] discloses that data in the results buffer memory can become input to the compute engine. Thus the output of the Range Fast Fourier Transform (FFT) in the results buffer of Tan in view of St.Michael and Searcy can be sent to the compute engine for Doppler FFT processing.)
Tan, St.Michael, and Searcy are all in a similar field of endeavor as all relate to managing arrays of data. Thus it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Range Fast Fourier Transform and the Doppler FFT as taught by Searcy into the solution of Tan and St.Michael that enables computations on arrays of data, thus combining prior art elements (the two computations of Searcy with the solution of Tan and St.Michael that performs computations) to yield predictable results (enabling the solution of Tan and St.Michael to detect one or more objects relative to a vehicle, as well as estimate the size and classification of detected objects, e.g., determine whether the target of the radar input is a vehicle versus a pedestrian). See Searcy [0017].
The data being processed is the output of the Range FFTs, which is input to a Doppler FFT in order to determine the Doppler value of an object within a particular range bin to achieve a particular level of accuracy.
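For context only, a generic range-FFT/Doppler-FFT processing chain may be sketched as follows using NumPy (hypothetical array sizes; not a reproduction of Searcy's or Tan's implementation):

import numpy as np

num_chirps, num_samples = 64, 128            # hypothetical data cube size
adc_data = np.random.randn(num_chirps, num_samples)

range_fft = np.fft.fft(adc_data, axis=1)     # first operations: range FFT per chirp
doppler_fft = np.fft.fft(range_fft, axis=0)  # second operations: Doppler FFT per range bin
# Each doppler_fft entry corresponds to a (Doppler bin, range bin) pair, from
# which the velocity of an object in a given range bin can be estimated.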
Response to Remarks
Examiner thanks applicant for their claim amendments and remarks of 11/4/2025. They have been fully considered.
Examiner agrees that Kano, Mutlu, and Robertson do not teach or suggest independent claims 1, 8, and 15 as currently presented. Therefore, the rejection has been withdrawn. However, upon further consideration, and in response to the claims as amended, a new ground(s) of rejection is made for claim 1 based on newly cited Tan (Tan et al., US 2024/0111528 A1) in view of St.Michael (an article titled Executing Commands in Memory: DRAM Commands by Stephen St. Michael published August 9, 2019) as detailed above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JANICE M. GIROUARD whose telephone number is (469)295-9131. The examiner can normally be reached M-F 9:30 - 7:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tim Vo can be reached at 571-272-3642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JANICE M. GIROUARD/Examiner, Art Unit 2138