DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Other Reference: Vlaiko (US 20170242606), different storage devices using a Host Memory Buffer (HMB).
Allowable Subject Matter
Claims 8 and 12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims (subject to the rejections below).
REASONS FOR ALLOWANCE
The following is an examiner’s statement of reasons for allowance:
For Claim 8, the prior art discloses and/or renders obvious the limitations from Claims 1, 5 and 6. The prior art does not appear to disclose the limitations from Claim 8.
For Claim 12, the prior art discloses and/or renders obvious the limitations from Claims 1 and 11. The prior art does not appear to disclose the limitations from Claim 12.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
In claims 1, 17 and 20, the recited "temporary storage device" and "one or more storage devices" would have been indefinite to one of ordinary skill in the art at the time of the invention. One of ordinary skill would not have known the metes and bounds of "temporary storage device" and "one or more storage devices," or whether the temporary storage device is included in the one or more storage devices or is instead a separate storage device that is not included in the one or more storage devices. Correction/clarification is required to clarify the scope of "temporary storage device" and "one or more storage devices."
Claims 2-16 and 18-19 are rejected based on their dependency from claims 1 and 17, respectively.
In claim 13, the recited "likely to be read" would have been indefinite to one of ordinary skill in the art at the time of the invention. One of ordinary skill would not have known the metes and bounds of "likely to be read," or how this limitation is manifested in the technology. For example, under one scenario the recited "likely to be read" is indicated by an auto-generated flag or variable, but under an alternate scenario it is input by a user, without any determination by computer technology. Correction is required to clarify the scope of "likely to be read."
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 11, 13, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hahn (US 20160246726) in view of Katagiri (US 20230094946).
Claim 1. Hahn discloses a memory sub-system (e.g., [0020] FIG. 2, exemplary operating environment in which adaptive HMB caching) comprising:
a set of memory components (e.g., [0025] Nonvolatile storage 208 may comprise the physical memory cells where data is stored. For example, in the case of flash memory, nonvolatile storage 208 may include NAND or NOR flash memory cells in two-dimensional);
one or more storage devices (e.g., 0026, Fig. 1 - SRAM on the memory controller of storage device 200); and
at least one processing device operatively coupled to the set of memory components and the one or more storage devices, the at least one processing device configured to perform operations comprising: (e.g., 0026, Fig. 3, FIG. 2 - adaptive HMB caching module 300 and hint derivation module 301 may comprise hardware or firmware components of storage device 200 that reside on the storage device side of host interface 202)
accessing data that identifies a host memory buffer (HMB) portion of a temporary storage device that has been allocated to the memory sub-system by a host (e.g., [0016] an adaptive HMB caching module maintains a tiered structure where portions of FTL data are stored in the HMB cache and other portions are stored in primary storage on the storage device and in nonvolatile storage on the storage device; 0023 - device to asynchronously and directly access host memory. In the illustrated example, host interface 202 includes an HMB interface 203 for interfacing with HMB 204 across host memory bus 205. HMB 204 is stored in host DRAM 206); and
a second set of physical storage locations on the HMB (e.g., 0071 - storage device further includes an adaptive host memory buffer (HMB) caching module for using the hints to identify portions of the table to cache in the HMB and for caching the identified portions in the HMB).
Hahn does not disclose, but Katagiri discloses
generating a virtual address space associated with the memory sub-system, the virtual address space comprising a first set of physical storage locations on the one or more storage devices (e.g., [0055] The internal buffer 62 is a storage area in which user data is temporarily stored. The internal buffer 62 temporarily stores data associated with a write command received from the host 2. The internal buffer 62 temporarily stores data that has been read from the NAND memory 5 based on a read command received from the host 2; 0054 - The LUT cache 61 has a plurality of entries each of which can store one or more pieces of address translation information. The address translation information stored in each of the entries includes, for example, a plurality of physical addresses that correspond to consecutive logical addresses, respectively. The LUT cache 61 stores a dirty flag for each piece of address translation information stored in the LUT cache 61); and
performing one or more memory operations on user data received from the host using the virtual address space and the set of memory components (e.g., 0035 - The write command includes information specifying a logical address to which the data stored in the memory 22 is to be written, and a data pointer indicating the storage area in which the data is stored in the memory 22; 0096 - controller 4 reads the management data of the SSD 3 from each read command data buffer).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HMB caching disclosed by Hahn in view of Katagiri, providing the benefit of a technique by which SSDs can effectively use a memory of a host (see Katagiri, 0005), including management of data stored in the NAND memory 5 and management of blocks included in the NAND memory 5 as a flash translation layer (FTL), where the management of data stored in the NAND memory 5 includes, for example, management of mapping information indicating correspondences between each logical address and each physical address of the NAND memory 5 (see Katagiri, 0044).
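As a purely illustrative sketch (the editor's, not drawn from Hahn, Katagiri, or the claims; all class and variable names are hypothetical), the claimed arrangement of a single virtual address space spanning a first set of physical storage locations on device-side storage and a second set on an HMB-backed region could be pictured as:

```python
# Hypothetical sketch: one flat virtual address space whose lower slot range
# is backed by device-side storage (e.g., controller SRAM/DRAM) and whose
# upper slot range is backed by a host memory buffer (HMB).

class VirtualAddressSpace:
    def __init__(self, device_slots, hmb_slots):
        self.device = [None] * device_slots  # first set of physical locations
        self.hmb = [None] * hmb_slots        # second set, on the HMB
        self.device_slots = device_slots

    def resolve(self, vaddr):
        """Map a flat virtual slot index onto (backing, physical index)."""
        if vaddr < self.device_slots:
            return ("device", vaddr)
        return ("hmb", vaddr - self.device_slots)

    def write(self, vaddr, value):
        backing, idx = self.resolve(vaddr)
        (self.device if backing == "device" else self.hmb)[idx] = value

    def read(self, vaddr):
        backing, idx = self.resolve(vaddr)
        return (self.device if backing == "device" else self.hmb)[idx]

vas = VirtualAddressSpace(device_slots=4, hmb_slots=4)
vas.write(2, "ftl-entry")   # lands in device-side storage
vas.write(6, "user-data")   # lands in the HMB portion
print(vas.resolve(6))       # prints ('hmb', 2)
```

This is only one reading of the limitation; it assumes the two sets of locations are disjoint and addressed through a single flat index, which is exactly the scope question raised in the 112(b) rejection above.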
Claim 2. Hahn discloses wherein the one or more storage devices comprise at least one static random access memory (SRAM) device or a first dynamic random access memory (DRAM) device (e.g., 0016 - primary level cache 102 that is maintained in SRAM on a memory controller of the storage device), and wherein the set of memory components comprises non-volatile memory devices (e.g., 0018 - storage 106 that is maintained in NAND or nonvolatile storage of the storage device).
Claim 3. Hahn discloses wherein the temporary storage device comprises a second DRAM device (e.g., 0017 - cache 104 may be stored in host DRAM), and wherein the non-volatile memory devices comprise NAND storage devices (e.g., 0018 - storage 106 that is maintained in NAND or nonvolatile storage of the storage device).
Claim 4. Hahn does not disclose, but Katagiri discloses
the operations comprising storing the user data received from the host on the HMB (e.g., [0034] Parts of the storage area of the memory 22 are used as read command data buffers 223-1, 223-2, . . . One read command data buffer 223 is associated with one read command).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HMB caching disclosed by Hahn in view of Katagiri, providing the benefit of a technique by which SSDs can effectively use a memory of a host (see Katagiri, 0005), including management of data stored in the NAND memory 5 and management of blocks included in the NAND memory 5 as a flash translation layer (FTL), where the management of data stored in the NAND memory 5 includes, for example, management of mapping information indicating correspondences between each logical address and each physical address of the NAND memory 5 (see Katagiri, 0044).
Claim 5. Hahn discloses the operations comprising: receiving, from the host, a request to program the user data (e.g., [0034] Referring to FIG. 5, in step 500, an I/O command is received. The I/O command may be a read command or a write command);
storing, on a first portion of the first set of physical storage locations, a mapping between a set of logical addresses associated with the request and a set of physical addresses on the set of memory components (e.g., [0024] address translation module 207 translates from the address space used by the host to the address space used by storage device 200 to access nonvolatile storage 208); and
caching, on a second portion of the first set of physical storage locations, the user data prior to programming the user data to the set of physical addresses on the set of memory components (e.g., Table 1 before para. 0035, Fig. 3 - move at least some of the FTL table entries for the 4K movie file to FTL cache 102).
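As a purely illustrative sketch of the Claim 5 write path (the editor's own, not taken from Hahn or Katagiri; all names are hypothetical), one portion of the device-side storage can hold the logical-to-physical (L2P) mapping while another portion caches the user data until it is programmed to the memory components:

```python
# Hypothetical write path: the L2P mapping is stored on a first portion of
# the device-side storage, and incoming user data is cached on a second
# portion prior to being programmed to the memory components (e.g., NAND).

class WritePath:
    def __init__(self):
        self.l2p = {}          # first portion: logical -> physical mapping
        self.write_cache = {}  # second portion: data cached before program
        self.nand = {}         # the set of memory components

    def program(self, logical, data, physical):
        self.l2p[logical] = physical           # record the mapping
        self.write_cache[logical] = data       # cache prior to programming
        # Flush: program the cached data to its physical address.
        self.nand[physical] = self.write_cache.pop(logical)

wp = WritePath()
wp.program(logical=0x10, data=b"hello", physical=0x200)
assert wp.nand[wp.l2p[0x10]] == b"hello"   # data landed at the mapped address
```

The sketch collapses caching and flushing into one call for brevity; in a real controller the flush would be deferred, which is the point of the claimed caching step.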
Claim 11. Hahn discloses the operations comprising: receiving, from the host, a request to read the user data from a set of logical addresses (e.g., 0028 - I/O command is a memory read command received by storage device 200);
searching the first set of physical storage locations to identify a set of physical addresses on the set of memory components mapped to the set of logical addresses (e.g., 0016 - the adaptive HMB caching module preferably only places FTL data in primary FTL cache 102 that is currently being accessed or likely to be accessed in the next few operations by the host; 0034 - In step 502, it is determined whether or not a hint already exists for the LBA range in the I/O command);
retrieving, from the set of memory components, the user data stored in the set of physical addresses (e.g., 0040 - The I/O command may be a read command or a write command regarding a specific LBA range); and
caching, on a portion of the second set of physical storage locations on the HMB, the user data that has been retrieved from the set of memory components (e.g., 0037 - the read command may be executed and FTL entries associated with any unread portions of the file that are expected to be read next may be cached in either the primary or secondary HMB caches; 0040 - if the data is expected to be heavily read but not written often, it may be grouped together with other "hot read" data to reduce read scrub copies).
Claim 13. Hahn discloses the operations comprising:
predicting a set of physical addresses on the set of memory components that is likely to be read by the host (e.g., 0028 - if the FTL data expected to be needed in the near future is not in either the primary or secondary FTL cache);
retrieving, from the set of memory components, a set of user data stored in the set of physical addresses (e.g., 0040 - if the data is expected to be heavily read but not written often, it may be grouped together with other "hot read" data to reduce read scrub copies of data which is relatively static); and
caching, on a portion of the second set of physical storage locations on the HMB, the set of user data that has been retrieved from the set of physical addresses that is predicted to be read by the host (e.g., 0040 - Applying the hint may also include caching FTL table entries in either the primary or secondary FTL caches for data that is expected to be read in the near future).
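As a purely illustrative sketch of the Claim 13 predictive caching steps (the editor's own; a simple sequential-readahead predictor is assumed, which neither the claim nor the cited references require), data at addresses predicted "likely to be read" is retrieved from the memory components and cached in an HMB-backed store:

```python
# Hypothetical sequential-readahead predictor: after a host read of physical
# address N, the addresses N+1..N+depth are predicted likely to be read, and
# their data is retrieved from the memory components into an HMB-backed cache.

def prefetch_into_hmb(nand, hmb_cache, last_read, depth=2):
    """Predict the next `depth` physical addresses and cache their data."""
    predicted = [last_read + i for i in range(1, depth + 1)]
    for addr in predicted:
        if addr in nand:                 # retrieve from the memory components
            hmb_cache[addr] = nand[addr] # cache on the HMB-backed locations
    return predicted

nand = {100: b"a", 101: b"b", 102: b"c", 103: b"d"}
hmb = {}
prefetch_into_hmb(nand, hmb, last_read=100)
# Subsequent host reads of addresses 101 and 102 can be served from the HMB.
assert hmb == {101: b"b", 102: b"c"}
```

Note that the choice of predictor is exactly what the 112(b) rejection of "likely to be read" above asks the applicant to clarify; this sketch fixes one arbitrary interpretation for illustration.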
Claim 17. Hahn discloses a method (e.g., [0020] FIG. 2, exemplary operating environment in which adaptive HMB caching) comprising:
a set of memory components of the memory sub-system (e.g., [0025] Nonvolatile storage 208 may comprise the physical memory cells where data is stored. For example, in the case of flash memory, nonvolatile storage 208 may include NAND or NOR flash memory cells in two-dimensional);
one or more storage devices of the memory sub-system (e.g., 0026, Fig. 1 - SRAM on the memory controller of storage device 200); and
at least one processing device operatively coupled to the set of memory components and the one or more storage devices, the at least one processing device configured to perform operations comprising: (e.g., 0026, Fig. 3, FIG. 2 - adaptive HMB caching module 300 and hint derivation module 301 may comprise hardware or firmware components of storage device 200 that reside on the storage device side of host interface 202)
accessing data that identifies a host memory buffer (HMB) portion of a temporary storage device that has been allocated to a memory sub-system by a host (e.g., [0016] an adaptive HMB caching module maintains a tiered structure where portions of FTL data are stored in the HMB cache and other portions are stored in primary storage on the storage device and in nonvolatile storage on the storage device; 0023 - device to asynchronously and directly access host memory. In the illustrated example, host interface 202 includes an HMB interface 203 for interfacing with HMB 204 across host memory bus 205. HMB 204 is stored in host DRAM 206); and
a second set of physical storage locations on the HMB (e.g., 0071 - storage device further includes an adaptive host memory buffer (HMB) caching module for using the hints to identify portions of the table to cache in the HMB and for caching the identified portions in the HMB).
Hahn does not disclose, but Katagiri discloses
generating a virtual address space associated with the memory sub-system, the virtual address space comprising a first set of physical storage locations on one or more storage devices of the memory sub-system (e.g., [0055] The internal buffer 62 is a storage area in which user data is temporarily stored. The internal buffer 62 temporarily stores data associated with a write command received from the host 2. The internal buffer 62 temporarily stores data that has been read from the NAND memory 5 based on a read command received from the host 2; 0054 - The LUT cache 61 has a plurality of entries each of which can store one or more pieces of address translation information. The address translation information stored in each of the entries includes, for example, a plurality of physical addresses that correspond to consecutive logical addresses, respectively. The LUT cache 61 stores a dirty flag for each piece of address translation information stored in the LUT cache 61); and
performing one or more memory operations on user data received from the host using the virtual address space and a set of memory components of the memory sub-system (e.g., 0035 - The write command includes information specifying a logical address to which the data stored in the memory 22 is to be written, and a data pointer indicating the storage area in which the data is stored in the memory 22; 0096 - controller 4 reads the management data of the SSD 3 from each read command data buffer).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HMB caching disclosed by Hahn in view of Katagiri, providing the benefit of a technique by which SSDs can effectively use a memory of a host (see Katagiri, 0005), including management of data stored in the NAND memory 5 and management of blocks included in the NAND memory 5 as a flash translation layer (FTL), where the management of data stored in the NAND memory 5 includes, for example, management of mapping information indicating correspondences between each logical address and each physical address of the NAND memory 5 (see Katagiri, 0044).
Claim 18 is rejected for reasons similar to Claim 2 above.
Claim 19 is rejected for reasons similar to Claim 3 above.
Claim 20. Hahn discloses a non-transitory computer-readable storage medium comprising instructions that, when executed by at least one processing device, cause the at least one processing device to perform operations comprising: (e.g., [0020] FIG. 2, exemplary operating environment in which adaptive HMB caching)
a set of memory components of the memory sub-system (e.g., [0025] Nonvolatile storage 208 may comprise the physical memory cells where data is stored. For example, in the case of flash memory, nonvolatile storage 208 may include NAND or NOR flash memory cells in two-dimensional);
one or more storage devices of the memory sub-system (e.g., 0026, Fig. 1 - SRAM on the memory controller of storage device 200); and
at least one processing device operatively coupled to the set of memory components and the one or more storage devices, the at least one processing device configured to perform operations comprising: (e.g., 0026, Fig. 3, FIG. 2 - adaptive HMB caching module 300 and hint derivation module 301 may comprise hardware or firmware components of storage device 200 that reside on the storage device side of host interface 202)
accessing data that identifies a host memory buffer (HMB) portion of a temporary storage device that has been allocated to a memory sub-system by a host (e.g., [0016] an adaptive HMB caching module maintains a tiered structure where portions of FTL data are stored in the HMB cache and other portions are stored in primary storage on the storage device and in nonvolatile storage on the storage device; 0023 - device to asynchronously and directly access host memory. In the illustrated example, host interface 202 includes an HMB interface 203 for interfacing with HMB 204 across host memory bus 205. HMB 204 is stored in host DRAM 206); and
a second set of physical storage locations on the HMB (e.g., 0071 - storage device further includes an adaptive host memory buffer (HMB) caching module for using the hints to identify portions of the table to cache in the HMB and for caching the identified portions in the HMB).
Hahn does not disclose, but Katagiri discloses
generating a virtual address space associated with the memory sub-system, the virtual address space comprising a first set of physical storage locations on one or more storage devices of the memory sub-system (e.g., [0055] The internal buffer 62 is a storage area in which user data is temporarily stored. The internal buffer 62 temporarily stores data associated with a write command received from the host 2. The internal buffer 62 temporarily stores data that has been read from the NAND memory 5 based on a read command received from the host 2; 0054 - The LUT cache 61 has a plurality of entries each of which can store one or more pieces of address translation information. The address translation information stored in each of the entries includes, for example, a plurality of physical addresses that correspond to consecutive logical addresses, respectively. The LUT cache 61 stores a dirty flag for each piece of address translation information stored in the LUT cache 61); and
performing one or more memory operations on user data received from the host using the virtual address space and a set of memory components of the memory sub-system (e.g., 0035 - The write command includes information specifying a logical address to which the data stored in the memory 22 is to be written, and a data pointer indicating the storage area in which the data is stored in the memory 22; 0096 - controller 4 reads the management data of the SSD 3 from each read command data buffer).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HMB caching disclosed by Hahn in view of Katagiri, providing the benefit of a technique by which SSDs can effectively use a memory of a host (see Katagiri, 0005), including management of data stored in the NAND memory 5 and management of blocks included in the NAND memory 5 as a flash translation layer (FTL), where the management of data stored in the NAND memory 5 includes, for example, management of mapping information indicating correspondences between each logical address and each physical address of the NAND memory 5 (see Katagiri, 0044).
Claims 6, 7, 9, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Hahn (US 20160246726) in view of Katagiri (US 20230094946), and further in view of Benisty (US 20240143508) (hereinafter "Benisty508").
Claim 6. Hahn in view of Katagiri does not disclose, but Benisty508 discloses
the operations comprising: caching, on a portion of the second set of physical storage locations on the HMB, the user data that is also cached on the second portion of the first set of physical storage locations (e.g., 0024 - the HMB 150 may be used by the controller 108 to store data that would normally be stored in a volatile memory 112, a buffer 116, an internal memory of the controller 108, such as static random access memory (SRAM), and the like. In examples where the data storage device 106 does not include a DRAM (i.e., optional DRAM 118), the controller 108 may utilize the HMB 150 as the DRAM of the data storage device 106).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HMB caching disclosed by Hahn in view of Katagiri with Benisty508, providing the benefit that the controller 108 may include an optional second volatile memory 120, which may be similar to the volatile memory 112 (see Benisty508, 0033).
Claim 7. Hahn discloses the operations comprising: transmitting, by the at least one processing device, an instruction to the temporary storage device to store the user data on the second set of physical storage locations on the HMB (e.g., 0017 - secondary FTL cache 104 may store FTL data that is likely to be accessed next).
Claim 9. Hahn discloses the operations comprising: receiving a read request from the host associated with the set of logical addresses; determining that the set of logical addresses corresponds to the user data that has been cached on the portion of the second set of physical storage locations on the HMB; and retrieving the user data from the portion of the second set of physical storage locations on the HMB in response to receiving the read request from the host (e.g., 0028 - if the FTL data required for memory accesses related to a particular read are already in the primary or secondary FTL cache, then it is not necessary to evict data from one of the caches; 0030, 0034, Fig. 4B - In step 502, the primary and secondary FTL caches are populated with frequently read data, such as data that is frequently read on boot up).
Claim 10. Hahn does not disclose, but Katagiri discloses
the operations comprising: transmitting, to the host, the user data that has been retrieved by the at least one processing device from the portion of the second set of physical storage locations on the HMB (e.g., [0145] The controller 4 reads the read target data from a storage location of the NAND memory 5 indicated by the physical address obtained in step S105 (step S106). The controller 4 stores the read target data which has been read in the internal buffer 62. [0146] The controller 4 executes data transfer for transferring the read target data which has been read in step S106 to the read command data buffer 223 of the memory 22 (step S107). The details of the data transfer are described later with reference to FIG. 15. [0147] The controller 4 transmits a completion response of the read command to the host 2 (step S108)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HMB caching disclosed by Hahn in view of Katagiri, providing the benefit of a technique by which SSDs can effectively use a memory of a host (see Katagiri, 0005), including management of data stored in the NAND memory 5 and management of blocks included in the NAND memory 5 as a flash translation layer (FTL), where the management of data stored in the NAND memory 5 includes, for example, management of mapping information indicating correspondences between each logical address and each physical address of the NAND memory 5 (see Katagiri, 0044).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Hahn (US 20160246726) in view of Katagiri (US 20230094946), and further in view of Zamir (US 20230418514).
Claim 14. Hahn in view of Katagiri does not disclose, but Zamir discloses
the operations comprising: prioritizing storage of a logical to physical address mapping on the first set of physical storage locations on one or more storage devices of the memory sub-system (e.g., 0039 - segments having a higher caching priority may be stored in an internal memory of the controller 302, such as SRAM, and segments having a lower caching priority may be stored in HMB or the NVM 306. For example, the first K2P segment 308a may be stored in an internal memory of the controller 302, such as SRAM, and the second and third K2P segments 308b, 308c may be stored in the NVM 306 or the HMB 150. In some examples, the controller 302 may utilize little to no prefetching for the lower caching priority segments when a related KV pair data is requested to be read).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HMB caching disclosed by Hahn in view of Katagiri with Zamir, providing the benefit of optimizing storage of key-to-physical (K2P) tables in key value (KV) data storage devices (see Zamir, 0001), addressing a need in the art to manage and optimize the management of the K2P table (see Zamir, 0004).
Claims 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Hahn (US 20160246726) in view of Katagiri (US 20230094946) and Zamir (cited above), and further in view of Benisty (US 20180018101) (hereinafter "Benisty101").
Claim 15. Hahn in view of Katagiri and Zamir does not disclose, but Benisty101 discloses
the operations comprising: determining that the one or more storage devices of the memory sub-system are full (e.g., 0046, Table 2 - data is then cached in the DRAM on the storage device, aggregated until enough data is present to write a full page).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HMB caching disclosed by Hahn in view of Katagiri and Zamir with Benisty101, providing the benefit of write classification and aggregation using the host memory buffer: the host memory buffer allows the controller of a storage device to use a designated portion of host memory for storing storage device data, and the designated memory resources allocated on the host are for the exclusive use of the storage device controller (see Benisty101).
Claim 16. Hahn in view of Katagiri and Zamir does not disclose, but Benisty101 discloses
the operations comprising: in response to determining that the one or more storage devices of the memory sub-system are full, programming additional memory management data on the second set of physical storage locations on the HMB (e.g., 0046 - then the data is written from the coupled DRAM to the flash memory in a single write operation without traversing the PCIe bus. For write caching using the HMB, data is initially read over the PCIe bus to the storage device, written from the storage device over the PCIe bus to the HMB, and read from the HMB to the storage device for writing to flash memory. Thus, for HMB caching, the data traverses the PCIe bus three times versus once for coupled DRAM caching).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HMB caching disclosed by Hahn in view of Katagiri and Zamir with Benisty101, providing the benefit of write classification and aggregation using the host memory buffer: the host memory buffer allows the controller of a storage device to use a designated portion of host memory for storing storage device data, and the designated memory resources allocated on the host are for the exclusive use of the storage device controller (see Benisty101).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GAUTAM SAIN whose telephone number is (571)270-3555. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jared Rutz can be reached at 571-272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GAUTAM SAIN/Primary Examiner, Art Unit 2135