DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Other Ref: Lee (US 20160203091) – Memory controller and memory system … fetch from host (Abstract).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, 12-16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Benisty (US 10387078 B1) in view of Kamble (US 20160217104 A1), further in view of Muthiah (US 20190370168 A1), and further in view of Benisty (US 20170090753) (hereinafter "Benisty753").
Claim 1. Benisty discloses A device (e.g., col 5:42-43, Fig. 1, a data storage system 100) comprising:
processing circuitry configured to:
fetch data from one or more submission queues of a host device (e.g., col 5:46-50, Fig. 1 - receives data from the host device 102 (via the submission queue 104)),
receive first feedback information from (e.g., col 5:3-25 - the NVM data storage controller reports (or otherwise exposes) a different head pointer to the host than the one tracked by the NVM data storage controller; indicates the submission queue is already three-quarters full. Herein, the queue depth reported to (or exposed to) the host may be referred to as the "effective queue depth"; col 9:11-20 - parameters may vary dynamically with device usage and may be updated periodically or continuously by the data storage device, such as the usage level of the queue by the host and the average data transfer size of each command. The queue usage level may be assessed, for example, by determining an average or mean value for the number of entries in the host submission queue, or by assessing how often the host uses the queue, etc.; col 9:23-25),
control transfer of the data from the one or more submission queues to based on the first feedback information (e.g., col 4:60-65 - controller adaptively and dynamically throttles the rate by which a host device inserts commands into NVMe submission queues to prevent timeouts or exception conditions),
Benisty does not disclose, but Kamble discloses
one or more internal queues; the one or more internal queues (e.g., 0056 - a host processor 304 can issue a queue command, … and then can be RDMA'd into a submission queue ring buffer 604)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty with Kamble, providing the benefit of host-based non-volatile memory clustering using network-mapped storage (see Kamble, 0001), which allows the full performance of the NVM to be used in a network infrastructure (0005), with a memory controller 308 that can be connected to the plurality of NVM storage devices 302 via an NVM interface 310 and a NIC 502 that can be connected to the memory controller 308 and configured to communicate across a network 512 with other nodes in an NVM cluster (0055).
Benisty in view of Kamble does not disclose, but Muthiah discloses
receive second feedback information from a memory device coupled to the processing circuitry, and control the transfer of the data from the one or more internal queues to the memory device based on the second feedback information (e.g., 0022 - receiving feedback from the backend module and determining an order in which to send the plurality of commands to the backend module based on feedback from the backend module; 0029 - a controller comprising back end means for processing and sending a command to the memory, determining an order in which to send the plurality of commands to the back end means based on feedback from the back end means; to integrate with requests of a host device - 0049 - the back end module 110 provides feedback to the front end module 108 to assist it in determining the order in which to send commands to the back end module 110).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, providing the benefit of receiving a plurality of commands from a host and determining an order in which to send the plurality of commands to the set of components based on feedback from the set of components (see Muthiah, 0012).
Benisty in view of Kamble and Muthiah does not disclose, but Benisty753 discloses
implement an arbitration and parsing function (e.g., 0024 - Submission queue selector 404 utilizes the submission queue statistics and the storage device resource state to identify one of submission queues 112.sub.1-112.sub.n from which the next command to be processed is selected and provide input to fetcher 406 that identifies the selected queue)
at a rate controlled by the arbitration and parsing function (e.g., 0030 - intelligent fetching also improves host utilization of a nonvolatile storage device because the nonvolatile storage device may process commands from the host faster than in implementations where round robin or weighted round robin command fetching only is used)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, with Benisty753, providing the benefit of intelligent selection so that the queue is not included or passed over in the current round of round robin or weighted round robin selection (see Benisty753, 0024).
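For clarity of record only, the statistics-driven queue selection described in Benisty753 at 0024 (selecting the next submission queue to fetch from based on per-queue statistics and device resource state) may be illustrated by the following sketch. The function and parameter names, and the fullest-queue-first policy, are hypothetical and are not taken from Benisty753 or any other cited reference.

```python
# Illustrative sketch only: pick the next submission queue to fetch from,
# using per-queue statistics (here, pending-command depth) and the device
# resource state (here, a simple busy flag). All names are hypothetical.

def select_queue(queue_depths, device_busy):
    """Return the index of the queue to fetch from next, or None.

    queue_depths: list of pending-command counts, one per submission queue.
    device_busy:  when True, the device defers fetching entirely.
    """
    if device_busy:
        return None
    candidates = [(depth, i) for i, depth in enumerate(queue_depths) if depth > 0]
    if not candidates:
        return None
    # Favor the deepest queue so a backed-up queue is drained first,
    # rather than cycling queues in fixed round-robin order.
    depth, index = max(candidates)
    return index
```

A fielded device would fold in richer statistics (average command size, queue usage frequency), but the control flow is the same: statistics in, selected queue index out to the fetcher.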
Claim 2. Benisty discloses wherein the first feedback information includes credit information that controls the transfer of data from the one or more submission queues to the one or more internal queues (e.g., col 4:65 - col 5:10 - adaptive throttling may be achieved by providing the host with a head pointer for a submission queue that differs from the actual head pointer maintained within the NVM data storage controller, with the adjusted head pointer set by the NVM data storage controller to throttle submissions by the host to the NVM device).
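For clarity of record only, the adjusted-head-pointer throttling quoted from Benisty above may be illustrated by the following sketch. The function name and the specific "report only a quarter of the queue as free" policy are hypothetical assumptions for illustration, not Benisty's implementation.

```python
# Illustrative sketch only: "effective queue depth" throttling. The device
# exposes a head pointer that makes the ring look fuller than it is, so the
# host, following its normal flow control, slows its submissions. The
# names and the quarter-depth policy are hypothetical.

def reported_head(actual_head, tail, queue_size, throttle):
    """Return the submission-queue head pointer to expose to the host.

    Without throttling, the actual head is reported. With throttling, the
    reported head is placed only queue_size // 4 slots ahead of the tail
    (modulo queue_size), so the host believes the queue is roughly
    three-quarters full and reduces or stops new submissions.
    """
    if not throttle:
        return actual_head
    return (tail + queue_size // 4) % queue_size
```

The host never sees the device's internal head pointer; it only sees the reported one, which is what makes this a credit-style mechanism.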
Claim 3. Benisty discloses wherein the credit information comprises information indicating an amount of resources used by the data transferred from the one or more submission queues to the one or more internal queues, and wherein the processing circuitry is further configured to adjust the rate based on the first feedback information (e.g., col 5:5-17 - NVM data storage controller reports a head pointer (via a completion queue entry posted in a separate completion queue) that indicates there is relatively little submission queue depth currently available. The host device, in accordance with its normal processing, then stops sending (or reduces the rate of) new memory access requests to the NVM controller via the submission queues).
Benisty in view of Kamble and Muthiah does not disclose, but Benisty753 discloses
rate controlled by the arbitration and parsing function (e.g., 0030 - intelligent fetching also improves host utilization of a nonvolatile storage device because the nonvolatile storage device may process commands from the host faster than in implementations where round robin or weighted round robin command fetching only is used)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, with Benisty753, providing the benefit of intelligent selection so that the queue is not included or passed over in the current round of round robin or weighted round robin selection (see Benisty753, 0024).
Claim 4. Benisty discloses wherein the first feedback information indicates a rate change for changing a transfer rate at which data is transferred from the one or more submission queues to the one or more internal queues (e.g., col 5:14-17 - The host device, in accordance with its normal processing, then stops sending (or reduces the rate of) new memory access requests to the NVM controller).
Claim 6. Benisty discloses wherein the rate change decreases the transfer rate of the data from the one or more submission queues to the one or more internal queues (e.g., col 5:11-14 - host device, in accordance with its normal processing, then stops sending (or reduces the rate of) new memory access requests to the NVM controller via the submission queues).
Claim 12. Benisty discloses wherein the data comprises one or more commands (e.g., 0018 - commands in submission queues).
Claim 13. Benisty discloses A method performed by at least one processor, the method (e.g., col 5:42-43, Fig. 1, a data storage system 100) comprising:
fetching data from one or more submission queues of a host device (e.g., col 5:46-50, Fig. 1 - receives data from the host device 102 (via the submission queue 104)),
receiving first feedback information from (e.g., col 5:3-25 - the NVM data storage controller reports (or otherwise exposes) a different head pointer to the host than the one tracked by the NVM data storage controller; indicates the submission queue is already three-quarters full. Herein, the queue depth reported to (or exposed to) the host may be referred to as the "effective queue depth"; col 9:11-20 - parameters may vary dynamically with device usage and may be updated periodically or continuously by the data storage device, such as the usage level of the queue by the host and the average data transfer size of each command. The queue usage level may be assessed, for example, by determining an average or mean value for the number of entries in the host submission queue, or by assessing how often the host uses the queue, etc.; col 9:23-25),
controlling transfer of the data from the one or more submission queues to based on the first feedback information (e.g., col 4:60-65 - controller adaptively and dynamically throttles the rate by which a host device inserts commands into NVMe submission queues to prevent timeouts or exception conditions),
Benisty does not disclose, but Kamble discloses
one or more internal queues; the one or more internal queues (e.g., 0056 - a host processor 304 can issue a queue command, … and then can be RDMA'd into a submission queue ring buffer 604)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty with Kamble, providing the benefit of host-based non-volatile memory clustering using network-mapped storage (see Kamble, 0001), which allows the full performance of the NVM to be used in a network infrastructure (0005), with a memory controller 308 that can be connected to the plurality of NVM storage devices 302 via an NVM interface 310 and a NIC 502 that can be connected to the memory controller 308 and configured to communicate across a network 512 with other nodes in an NVM cluster (0055).
Benisty in view of Kamble does not disclose, but Muthiah discloses
receiving second feedback information from a memory device coupled to the processor, and controlling the transfer of the data from the one or more internal queues to the memory device based on the second feedback information (e.g., 0022 - receiving feedback from the backend module and determining an order in which to send the plurality of commands to the backend module based on feedback from the backend module; 0029 - a controller comprising back end means for processing and sending a command to the memory, determining an order in which to send the plurality of commands to the back end means based on feedback from the back end means; to integrate with requests of a host device - 0049 - the back end module 110 provides feedback to the front end module 108 to assist it in determining the order in which to send commands to the back end module 110).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, providing the benefit of receiving a plurality of commands from a host and determining an order in which to send the plurality of commands to the set of components based on feedback from the set of components (see Muthiah, 0012).
Benisty in view of Kamble and Muthiah does not disclose, but Benisty753 discloses
implementing an arbitration and parsing function (e.g., 0024 - Submission queue selector 404 utilizes the submission queue statistics and the storage device resource state to identify one of submission queues 112.sub.1-112.sub.n from which the next command to be processed is selected and provide input to fetcher 406 that identifies the selected queue)
at a rate controlled by the arbitration and parsing function (e.g., 0030 - intelligent fetching also improves host utilization of a nonvolatile storage device because the nonvolatile storage device may process commands from the host faster than in implementations where round robin or weighted round robin command fetching only is used)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, with Benisty753, providing the benefit of intelligent selection so that the queue is not included or passed over in the current round of round robin or weighted round robin selection (see Benisty753, 0024).
Claim 14 is rejected for reasons similar to Claim 2 above.
Claim 15 is rejected for reasons similar to Claim 3 above.
Claim 16 is rejected for reasons similar to Claim 4 above.
Claim 18 is rejected for reasons similar to Claim 6 above.
Claim 20. Benisty discloses A non-transitory computer readable medium having instructions stored therein, which when executed by a processor cause the processor to perform a method (e.g., col 5:42-43, Fig. 1, a data storage system 100) comprising:
fetching data from one or more submission queues of a host device (e.g., col 5:46-50, Fig. 1 - receives data from the host device 102 (via the submission queue 104)),
receiving first feedback information from (e.g., col 5:3-25 - the NVM data storage controller reports (or otherwise exposes) a different head pointer to the host than the one tracked by the NVM data storage controller; indicates the submission queue is already three-quarters full. Herein, the queue depth reported to (or exposed to) the host may be referred to as the "effective queue depth"; col 9:11-20 - parameters may vary dynamically with device usage and may be updated periodically or continuously by the data storage device, such as the usage level of the queue by the host and the average data transfer size of each command. The queue usage level may be assessed, for example, by determining an average or mean value for the number of entries in the host submission queue, or by assessing how often the host uses the queue, etc.; col 9:23-25),
controlling transfer of the data from the one or more submission queues to based on the first feedback information (e.g., col 4:60-65 - controller adaptively and dynamically throttles the rate by which a host device inserts commands into NVMe submission queues to prevent timeouts or exception conditions),
Benisty does not disclose, but Kamble discloses
one or more internal queues; the one or more internal queues (e.g., 0056 - a host processor 304 can issue a queue command, … and then can be RDMA'd into a submission queue ring buffer 604)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty with Kamble, providing the benefit of host-based non-volatile memory clustering using network-mapped storage (see Kamble, 0001), which allows the full performance of the NVM to be used in a network infrastructure (0005), with a memory controller 308 that can be connected to the plurality of NVM storage devices 302 via an NVM interface 310 and a NIC 502 that can be connected to the memory controller 308 and configured to communicate across a network 512 with other nodes in an NVM cluster (0055).
Benisty in view of Kamble does not disclose, but Muthiah discloses
receiving second feedback information from a memory device coupled to the processor, and controlling the transfer of the data from the one or more internal queues to the memory device based on the second feedback information (e.g., 0022 - receiving feedback from the backend module and determining an order in which to send the plurality of commands to the backend module based on feedback from the backend module; 0029 - a controller comprising back end means for processing and sending a command to the memory, determining an order in which to send the plurality of commands to the back end means based on feedback from the back end means; to integrate with requests of a host device - 0049 - the back end module 110 provides feedback to the front end module 108 to assist it in determining the order in which to send commands to the back end module 110).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, providing the benefit of receiving a plurality of commands from a host and determining an order in which to send the plurality of commands to the set of components based on feedback from the set of components (see Muthiah, 0012).
Benisty in view of Kamble and Muthiah does not disclose, but Benisty753 discloses
implementing an arbitration and parsing function (e.g., 0024 - Submission queue selector 404 utilizes the submission queue statistics and the storage device resource state to identify one of submission queues 112.sub.1-112.sub.n from which the next command to be processed is selected and provide input to fetcher 406 that identifies the selected queue)
at a rate controlled by the arbitration and parsing function (e.g., 0030 - intelligent fetching also improves host utilization of a nonvolatile storage device because the nonvolatile storage device may process commands from the host faster than in implementations where round robin or weighted round robin command fetching only is used)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, with Benisty753, providing the benefit of intelligent selection so that the queue is not included or passed over in the current round of round robin or weighted round robin selection (see Benisty753, 0024).
Claims 5 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Benisty (US 10387078 B1) in view of Kamble (US 20160217104 A1), further in view of Muthiah (US 20190370168 A1), Benisty753 (cited above), and Trika (US 20240095074 A1).
Claim 5. Benisty in view of Kamble and Muthiah and Benisty753 does not disclose, but Trika discloses
wherein the rate change increases the transfer rate of the data from the one or more submission queues to the one or more internal queues (e.g., col 16:10-12 - the NVM data storage controller throttles the host in a more aggressive manner; 0028 - requests from submission queues having higher WRRCs are submitted to the fulfillment processor and fulfilled at a greater rate than requests from submission queues having lower WRRCs).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, with Benisty753, and with Trika, providing the benefit of promoting fairness to the clients while simultaneously allowing full utilization of the storage hardware performance (see Trika, 0006).
Claim 17 is rejected for reasons similar to Claim 5 above.
Claims 7 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Benisty (US 10387078 B1) in view of Kamble (US 20160217104 A1), further in view of Muthiah (US 20190370168 A1), Benisty753 (cited above), and Saund (US 20140310437 A1).
Claim 7. Benisty in view of Kamble and Muthiah and Benisty753 does not disclose, but Saund discloses
wherein the processing circuitry is further configured to implement a deficit weight round robin scheduling algorithm for the transfer of the data to and from the one or more internal command queues based on the second feedback information (e.g., 0026 - arbiters 20 may implement a deficit-weighted round robin arbitration scheme and thus may be programmable in the configuration registers 24 with weights for each input port and/or transaction priority/QoS level/virtual channel; 0034 - controller 22 may include various queues for buffering memory operations, data for the operations, etc., and the circuitry to sequence the operations and access the memory 12 according to the interface defined for the memory 12).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, with Benisty753, and with Saund, providing the benefit of avoiding bandwidth characteristics different from those expected from the configuration of the weights (see Saund, 0006), wherein transactions may be more frequently selected to fill the available bandwidth (0009).
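For clarity of record only, the deficit-weighted round robin arbitration quoted from Saund at 0026 may be illustrated by the following sketch of the standard DWRR technique. This is a generic textbook rendering with hypothetical names, not code from Saund or any other cited reference; the weights here play the role of per-port quanta consumed by transaction size, as Saund's 0031 describes.

```python
# Illustrative sketch only: deficit-weighted round robin (DWRR). Each
# queue accumulates a per-round quantum (its weight) in a deficit
# counter; an item is served only when its cost fits in the deficit,
# so heavier weights yield proportionally more bandwidth over time.
from collections import deque

def dwrr_schedule(queues, weights, rounds):
    """Serve items from queues under DWRR; return the service order.

    queues:  list of deques of (name, cost) items, cost e.g. in bytes.
    weights: per-queue quantum added to that queue's deficit each round.
    rounds:  number of full passes to simulate.
    """
    deficits = [0] * len(queues)
    served = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0  # idle queues accumulate no credit
                continue
            deficits[i] += weights[i]
            # Serve while the head item's cost fits in the deficit.
            while q and q[0][1] <= deficits[i]:
                name, cost = q.popleft()
                deficits[i] -= cost
                served.append(name)
    return served
```

With equal weights, a queue holding one large item waits until its deficit covers the item's cost, while a queue of small items is served each round, which is the fairness property the rejection relies on.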
Claim 19 is rejected for reasons similar to Claim 7 above.
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Benisty (US 10387078 B1) in view of Kamble (US 20160217104 A1), further in view of Muthiah (US 20190370168 A1), Benisty753 (cited above), Saund (US 20140310437 A1), and Horspool (US 20220261183 A1).
Claim 8. Benisty in view of Kamble and Muthiah and Benisty753 does not disclose, but Saund discloses
based on the deficit weight round robin algorithm (e.g., 0026 - arbiters 20 may implement a deficit-weighted round robin arbitration scheme and thus may be programmable in the configuration registers 24 with weights for each input port and/or transaction priority/QoS level/virtual channel; 0034 - controller 22 may include various queues for buffering memory operations, data for the operations, etc., and the circuitry to sequence the operations and access the memory 12 according to the interface defined for the memory 12).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, with Benisty753, and with Saund, providing the benefit of avoiding bandwidth characteristics different from those expected from the configuration of the weights (see Saund, 0006), wherein transactions may be more frequently selected to fill the available bandwidth (0009).
Benisty in view of Kamble and Muthiah and Benisty753 and Saund does not disclose, but Horspool discloses
is further configured to control the transfer of data from the one or more internal command queues based on performance monitoring information after completion of a command (e.g., 0030 - After the driver completes processing a group of completion queue entries, it signals this to the SSD 120 by writing the completion queue's updated head pointer to the respective completion queue 226-229).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, with Benisty753, and with Saund, with Horspool, providing the benefit of dynamically prioritizing command submission queues from a host so as to maximize throughput and improve performance of the SSD (see Horspool, 0001).
Claim 9. Benisty in view of Kamble and Muthiah and Benisty753 and Saund does not disclose, but Horspool discloses
wherein the processing circuitry is further configured to delay an update of a head pointer associated with at least one queue from the one or more submission queues based on the first feedback information (e.g., 0030 - After the driver completes processing a group of completion queue entries, it signals this to the SSD 120 by writing the completion queue's updated head pointer to the respective completion queue 226-229).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, with Benisty753, and with Saund, with Horspool, providing the benefit of dynamically prioritizing command submission queues from a host so as to maximize throughput and improve performance of the SSD (see Horspool, 0001).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Benisty (US 10387078 B1) in view of Kamble (US 20160217104 A1), further in view of Muthiah (US 20190370168 A1), Benisty753 (cited above), Saund (US 20140310437 A1), and Horspool (US 20220261183 A1).
Claim 10. Benisty in view of Kamble and Muthiah and Benisty753 and Saund does not disclose, but Horspool discloses
wherein the second feedback information comprises information indicating an amount of resources used by the data transferred from the one or more internal queues to the memory device (e.g., 0047 - NVMe controller 132 evaluates the size of in-flight data in the LPQs 414, 424, 443-445, and HPQs 412, 422, 440-442 (both individually for each queue and in total summation) associated with all commands from the particular submission queue currently selected by arbitration, and determines if the size of each queue exceeds (i.e., is more than) an individual queue threshold or the total size of all queues exceeds a total queue threshold that has been predetermined by the user).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, with Benisty753, and with Saund, with Horspool, providing the benefit of dynamically prioritizing command submission queues from a host so as to maximize throughput and improve performance of the SSD (see Horspool, 0001).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Benisty (US 10387078 B1) in view of Kamble (US 20160217104 A1), further in view of Muthiah (US 20190370168 A1), Benisty753 (cited above), Horspool (US 20220261183 A1), and Saund (US 20140310437 A1).
Claim 11. Benisty in view of Kamble and Muthiah and Benisty753 does not disclose, but Saund discloses
wherein the second feedback information comprises information indicating an amount of memory bandwidth used by the data transferred from the one or more internal queues to the memory device and an amount (e.g., 0031 - weights may represent relative amounts of bandwidth to be assigned to different sources/priorities/QoS levels/virtual channels. For example, the weights may be transaction counts. Alternatively, the weights may be data counts and may be consumed based on the size of the transaction).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, with Benisty753, and with Saund, providing the benefit of avoiding bandwidth characteristics different from those expected from the configuration of the weights (see Saund, 0006), wherein transactions may be more frequently selected to fill the available bandwidth (0009).
Response to Arguments
Applicant's arguments filed 12/11/2025 have been fully considered but they are not persuasive.
For claims 1, 13, and 20, Applicant argues that the cited references do not disclose the amended limitations. The Office disagrees.
In the present Office action, the updated combination of references renders the amended limitations obvious.
Specifically, Benisty in view of Kamble and Muthiah does not disclose, but Benisty753 discloses
implement an arbitration and parsing function (e.g., 0024 - Submission queue selector 404 utilizes the submission queue statistics and the storage device resource state to identify one of submission queues 112.sub.1-112.sub.n from which the next command to be processed is selected and provide input to fetcher 406 that identifies the selected queue)
at a rate controlled by the arbitration and parsing function (e.g., 0030 - intelligent fetching also improves host utilization of a nonvolatile storage device because the nonvolatile storage device may process commands from the host faster than in implementations where round robin or weighted round robin command fetching only is used)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the data storage system with submission queues at the host as disclosed by Benisty, with Kamble, with Muthiah, with Benisty753, providing the benefit of intelligent selection so that the queue is not included or passed over in the current round of round robin or weighted round robin selection (see Benisty753, 0024).
Applicant's arguments for dependent claims 2-12 and 14-19 are based on their respective base independent claims 1 and 13, which are addressed above.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GAUTAM SAIN whose telephone number is (571)270-3555. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jared Rutz can be reached at 571-272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GAUTAM SAIN/Primary Examiner, Art Unit 2135