DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed December 22, 2025 has been entered. Claims 6-11 remain pending in the application. Claims 1-5 have been cancelled. Applicant's amendments to the claims have overcome the 35 U.S.C. 112(a), 35 U.S.C. 112(b), and 35 U.S.C. 103 rejections previously set forth in the Non-Final Office Action mailed September 25, 2025.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6-11 are rejected under 35 U.S.C. 103 as being unpatentable over Huangfu et al. (US 2023/0281128), Jeon et al. (US 2024/0061618), Shim et al. (US 2013/0179752), Kulkarni et al. (US 10,049,035), Jin et al. (US 2019/0324644), and Cho et al. (US 2011/0276777).
Regarding claim 6, Huangfu et al. disclose:
A method of implementing a self-managed dynamic random-access memory (DRAM) module, comprising:
a plurality of DRAM chips (FIG. 3A CXL DIMM module 310; [0032] CXL memory modules 310 may be Dual In-Line Memory Modules (DIMMs), and may be used as DRAM; [0090] local DRAM chips in the CXLG-DIMM); and
providing a controller ([0078] DIMM-side Memory Controller (MC))…configured to store either…data using channel/bank interleaving among a plurality of channels or…data sequentially along a single channel as sequential data ([0110] The default address mapping scheme may interleave data in continuous address between different channels and ranks to fully utilize available memory bandwidth from different channels and ranks for the host. For the CXL-DIMMs, the coarse-grained NDP aware address mapping may be used. Instead of interleaving data, the coarse-grained NDP aware address mapping may aggregate data within each rank locally to enable efficient local memory access and reduce data movement. For the CXLG-DIMMs, if multiple continuous fine-grained memory accesses are needed to access the target data, e.g., DNA seeding, a fine-grained and coalesced address mapping may be used. The fine-grained and coalesced address mapping may support fine-grained memory access and may aggregate data within each DRAM chip to better leverage locality. On the other hand, if a single fine-grained memory access is more than enough to access the target data, e.g., k-mer counting, the fine-grained and distributed address mapping may be used. The coarse-grained and distributed address mapping may also support fine-grained memory access, while it distributes data to different DRAM chips as much as possible to better leverage chip-level bandwidth and parallelism);
Huangfu et al. do not appear to explicitly teach “a controller chip…allocating a sequential region of memory space in the DRAM chips to store sequential data; evaluating, in the background, an uncompressed data chunk stored using channel/bank interleaving to determine whether the uncompressed data chunk has been sequentially accessed over a time window; in response to the uncompressed data chunk being sequentially accessed over the time window, compressing the uncompressed data chunk to form a compressed data chunk; and writing the compressed data chunk to the sequential region.” However, Jeon et al. disclose:
a controller chip ([0027] the controller 300…may be implemented as an individual chip)
Huangfu et al. and Jeon et al. are analogous art because Huangfu et al. teach a cache-coherent interconnect memory system and Jeon et al. teach controller chips.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Huangfu et al. and Jeon et al. before him/her, to modify the teachings of Huangfu et al. with the Jeon et al. teachings of controller chips because implementing a controller chip would have been an obvious variation of the example embodiment described by Huangfu et al.
Huangfu et al. and Jeon et al. do not appear to explicitly teach “allocating a sequential region of memory space in the DRAM chips to store sequential data; evaluating, in the background, an uncompressed data chunk stored using channel/bank interleaving to determine whether the uncompressed data chunk has been sequentially accessed over a time window; in response to the uncompressed data chunk being sequentially accessed over the time window, compressing the uncompressed data chunk to form a compressed data chunk; and writing the compressed data chunk to the sequential region.” However, Shim et al. disclose:
allocating a sequential region of memory space…to store sequential data ([0123] sequentially receive a first write request and a second write request; [0129] After sequentially collected at the RAM 1340, the first and second compressed data may be programmed into a cell array 1410. For example, as illustrated in FIG. 6, the first and second compressed data may be programmed into a first page of a memory block 1420);
…writing the compressed data chunk to the sequential region (FIG. 6; [0129] After sequentially collected at the RAM 1340, the first and second compressed data may be programmed into a cell array 1410. For example, as illustrated in FIG. 6, the first and second compressed data may be programmed into a first page of a memory block 1420).
Huangfu et al., Jeon et al., and Shim et al. are analogous art because Huangfu et al. teach a cache-coherent interconnect memory system; Jeon et al. teach controller chips; and Shim et al. teach compressing data in a memory system.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Huangfu et al., Jeon et al., and Shim et al. before him/her, to modify the combined teachings of Huangfu et al. and Jeon et al. with the Shim et al. teachings of compressing sequential data because compressing data can reduce the size of read and write data and improve the input/output performance of the memory system.
Huangfu et al., Jeon et al., and Shim et al. do not appear to explicitly teach “in the DRAM chips…evaluating, in the background, an uncompressed data chunk stored using channel/bank interleaving to determine whether the uncompressed data chunk has been sequentially accessed over a time window; in response to the uncompressed data chunk being sequentially accessed over the time window, compressing the uncompressed data chunk to form a compressed data chunk.” However, Kulkarni et al. disclose:
…in the DRAM chips (Col 7, lines 11-13: Sequential allocation of DRAM is performed in a manner similar to that described above for the flash memory);
Huangfu et al., Jeon et al., Shim et al., and Kulkarni et al. are analogous art because Huangfu et al. teach a cache-coherent interconnect memory system; Jeon et al. teach controller chips; Shim et al. teach compressing data in a memory system; and Kulkarni et al. teach memory management.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the combined teachings of Huangfu et al., Jeon et al., Shim et al., and Kulkarni et al. before him/her, to modify the combined teachings of Huangfu et al., Jeon et al., and Shim et al. with the Kulkarni et al. teachings of allocating a sequential region in a DRAM because such a modification would have amounted to little more than combining "familiar elements according to known methods" and would have been obvious because it would have done "no more than yield predictable results." (MPEP 2143 I.A.) Sequential allocation of DRAM is a known technique that would have yielded the predictable result of allocating a region of the DRAM for storage of data.
Huangfu et al., Jeon et al., Shim et al., and Kulkarni et al. do not appear to explicitly teach “evaluating, in the background, an uncompressed data chunk stored using channel/bank interleaving to determine whether the uncompressed data chunk has been sequentially accessed over a time window; in response to the uncompressed data chunk being sequentially accessed over the time window, compressing the uncompressed data chunk to form a compressed data chunk.” However, Jin et al. disclose:
evaluating…an uncompressed data chunk…to determine whether the uncompressed data chunk has been sequentially accessed over a time window (FIG. 10 S1003 Determine pattern map data stored in each map chunk included in mapping table; [0103] the controller 130 determines whether the map data stored in each of the map chunk is map data corresponding to random data, map data corresponding to sequential data);
in response to the uncompressed data chunk being sequentially accessed over the time window, compressing the uncompressed data chunk to form a compressed data chunk (FIG. 10 S1007 Selectively preform compression depending on compression performance determination result; [0104] At step S1005, the controller 130 determines whether to perform compression, depending on the pattern determination result of the step S1003 for the map data stored in each map chunk; [0105] At step S1007, the controller 130 performs compression to different chunk units by applying different compression rates depending on the patterns of the map data stored in the map chunks, according to the results of determining whether to perform compression at the step S1005. For instance, the controller 130 may apply a compression rate of 0% in the case where the pattern of the map data stored in a map chunk is random data. The controller 130 may compress a map chunk size by applying a compression rate of 50% in the case where the pattern of the map data stored in a map chunk is sequential data); and
Huangfu et al., Jeon et al., Shim et al., Kulkarni et al., and Jin et al. are analogous art because Huangfu et al. teach a cache-coherent interconnect memory system; Jeon et al. teach controller chips; Shim et al. teach compressing data in a memory system; Kulkarni et al. teach memory management; and Jin et al. teach compressing data in a memory system.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Huangfu et al., Jeon et al., Shim et al., Kulkarni et al., and Jin et al. before him/her, to modify the combined teachings of Huangfu et al., Jeon et al., Shim et al., and Kulkarni et al. with the Jin et al. teachings of compressing data stored in a memory device because compressing data determined to be sequential that is stored in the memory would prevent degradation of operational performance of the memory system.
Huangfu et al., Jeon et al., Shim et al., Kulkarni et al., and Jin et al. do not appear to explicitly teach “evaluating, in the background.” However, Cho et al. disclose:
evaluating, in the background ([0053] Data stored in the assigned blocks is then compressed by compression block 1300 in a background operation BGO of data storage device 1000, and resulting compressed data is stored in other assigned blocks (e.g., data blocks) of storage medium 1100. Background operation BGO can include, for instance, a merge operation, an idle operation, a copy-back operation, etc.; [0054] However, background operation BGO can take other forms. In addition, operations such as data compression can be performed in an operation accompanying data transfer between memory blocks within data storage device 1000 without a host request)…
Huangfu et al., Jeon et al., Shim et al., Kulkarni et al., Jin et al., and Cho et al. are analogous art because Huangfu et al. teach a cache-coherent interconnect memory system; Jeon et al. teach controller chips; Shim et al. teach compressing data in a memory system; Kulkarni et al. teach memory management; Jin et al. teach compressing data in a memory system; and Cho et al. teach background compression operations.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Huangfu et al., Jeon et al., Shim et al., Kulkarni et al., Jin et al., and Cho et al. before him/her, to modify the combined teachings of Huangfu et al., Jeon et al., Shim et al., Kulkarni et al., and Jin et al. with the Cho et al. teachings of background operations because performing the evaluation in the background can improve performance of the DRAM (Cho et al. [0039]).
Huangfu et al. disclose that channel interleaving is the default address mapping scheme and that this mapping scheme supports single memory accesses. Huangfu et al. also disclose that multiple continuous memory accesses should be aggregated locally within each rank to enable efficient local memory access and reduce data movement, as discussed above. Jin et al. disclose compressing sequentially accessed data at a rate of 50% and compressing randomly accessed data at a rate of 0%, as discussed above. The Huangfu et al. single memory accesses correspond to the Jin et al. uncompressed, random accesses, and the Huangfu et al. continuous memory accesses correspond to the Jin et al. compressed, sequential accesses. Huangfu et al., Jeon et al., Shim et al., Kulkarni et al., Jin et al., and Cho et al. do not appear to explicitly teach that the uncompressed data is stored using channel/bank interleaving and the compressed data is stored along a single channel. However, based on the teachings of Huangfu et al. and Jin et al., it would have been obvious to store the uncompressed, random accesses using channel/bank interleaving and to store the compressed, sequential accesses along a single channel because storing the sequential, compressed data locally would enable efficient local memory access and reduce data movement.
Regarding claim 7, the combination of Huangfu et al., Jeon et al., Shim et al., Kulkarni et al., Jin et al., and Cho et al. further disclose:
The method of claim 6, further comprising:
evaluating in the background (as taught by Cho et al. in claim 6) a compressed data chunk stored as sequential data to determine whether the compressed data chunk has been sequentially accessed over the time window (Jin et al. further disclose: FIG. 10 S1003 Determine pattern map data stored in each map chunk included in mapping table; [0103] the controller 130 determines whether the map data stored in each of the map chunk is map data corresponding to random data, map data corresponding to sequential data); and
in response to the compressed data chunk not being sequentially accessed over the time window, reading and decompressing the compressed data chunk to generate a decompressed data chunk (Jin et al. further disclose: FIG. 10 S1007 Selectively preform compression depending on compression performance determination result; [0104] At step S1005, the controller 130 determines whether to perform compression, depending on the pattern determination result of the step S1003 for the map data stored in each map chunk; [0105] At step S1007, the controller 130 performs compression to different chunk units by applying different compression rates depending on the patterns of the map data stored in the map chunks, according to the results of determining whether to perform compression at the step S1005. For instance, the controller 130 may apply a compression rate of 0% in the case where the pattern of the map data stored in a map chunk is random data. The controller 130 may compress a map chunk size by applying a compression rate of 50% in the case where the pattern of the map data stored in a map chunk is sequential data) and writing the decompressed data chunk using channel/bank interleaving (Huangfu et al. further disclose: [0110] The default address mapping scheme may interleave data in continuous address between different channels and ranks to fully utilize available memory bandwidth from different channels and ranks for the host….if a single fine-grained memory access is more than enough to access the target data, e.g., k-mer counting, the fine-grained and distributed address mapping may be used. The coarse-grained and distributed address mapping may also support fine-grained memory access, while it distributes data to different DRAM chips as much as possible to better leverage chip-level bandwidth and parallelism).
Regarding claim 8, Huangfu et al. further disclose:
The method of claim 7, wherein the channel/bank interleaving is disabled prior to writing the compressed data chunk so that data is stored consecutively along a single channel of DRAM chips ([0110] The default address mapping scheme may interleave data in continuous address between different channels and ranks to fully utilize available memory bandwidth from different channels and ranks for the host. For the CXL-DIMMs, the coarse-grained NDP aware address mapping may be used. Instead of interleaving data, the coarse-grained NDP aware address mapping may aggregate data within each rank locally to enable efficient local memory access and reduce data movement).
Regarding claim 9, Shim et al. further disclose:
The method of claim 7, further comprising storing metadata to record whether data chunks are stored either as compressed data chunks or uncompressed data chunks ([0133] FIG. 8, compression information CI marked by `Y` may indicate that data is compressed data, and compression information CI marked by `N` may indicate that data is uncompressed data).
Regarding claim 10, Shim et al. further disclose:
The method of claim 6, wherein evaluating the uncompressed data chunk includes first loading the uncompressed data chunk into a write buffer (FIG. 6 Source Data 1; [0123] A storage device 1200 may sequentially receive a first write request and a second write request from a host 1100; [0125] The first source data may be temporarily stored in the RAM 1340).
Regarding claim 11, Huangfu et al. disclose:
A method of implementing a self-managed dynamic random-access memory (DRAM) module, comprising:
providing a plurality of DRAM chips (FIG. 3A CXL DIMM module 310; [0032] CXL memory modules 310 may be Dual In-Line Memory Modules (DIMMs), and may be used as DRAM; [0090] local DRAM chips in the CXLG-DIMM);
providing a controller ([0078] DIMM-side Memory Controller (MC))…configured to store either…data using channel/bank interleaving among a plurality of channels or…data sequentially along a single channel as sequential data ([0110] The default address mapping scheme may interleave data in continuous address between different channels and ranks to fully utilize available memory bandwidth from different channels and ranks for the host. For the CXL-DIMMs, the coarse-grained NDP aware address mapping may be used. Instead of interleaving data, the coarse-grained NDP aware address mapping may aggregate data within each rank locally to enable efficient local memory access and reduce data movement. For the CXLG-DIMMs, if multiple continuous fine-grained memory accesses are needed to access the target data, e.g., DNA seeding, a fine-grained and coalesced address mapping may be used. The fine-grained and coalesced address mapping may support fine-grained memory access and may aggregate data within each DRAM chip to better leverage locality. On the other hand, if a single fine-grained memory access is more than enough to access the target data, e.g., k-mer counting, the fine-grained and distributed address mapping may be used. The coarse-grained and distributed address mapping may also support fine-grained memory access, while it distributes data to different DRAM chips as much as possible to better leverage chip-level bandwidth and parallelism);
Huangfu et al. do not appear to explicitly teach “a controller chip…allocating a sequential region of memory space in the DRAM chips to store sequential data; evaluating, in the background, a compressed data chunk stored sequentially to determine whether the compressed data chunk has been sequentially accessed over a time window; in response to the compressed data chunk not being sequentially accessed over the time window, uncompressing the compressed data chunk to form an uncompressed data chunk; and writing the uncompressed data chunk using channel/bank interleaving among the plurality of channels.” However, Jeon et al. disclose:
a controller chip ([0027] the controller 300…may be implemented as an individual chip)
The motivation for combining is based on the same rationale presented for the rejection of independent claim 6.
Huangfu et al. do not appear to explicitly teach “allocating a sequential region of memory space in the DRAM chips to store sequential data; evaluating, in the background, a compressed data chunk stored sequentially to determine whether the compressed data chunk has been sequentially accessed over a time window; in response to the compressed data chunk not being sequentially accessed over the time window, uncompressing the compressed data chunk to form an uncompressed data chunk; and writing the uncompressed data chunk using channel/bank interleaving among the plurality of channels.” However, Shim et al. disclose:
allocating a sequential region of memory space…to store sequential data ([0123] sequentially receive a first write request and a second write request; [0129] After sequentially collected at the RAM 1340, the first and second compressed data may be programmed into a cell array 1410. For example, as illustrated in FIG. 6, the first and second compressed data may be programmed into a first page of a memory block 1420);
The motivation for combining is based on the same rationale presented for the rejection of independent claim 6.
Huangfu et al., Jeon et al., and Shim et al. do not appear to explicitly teach “in the DRAM chips…evaluating, in the background, a compressed data chunk stored sequentially to determine whether the compressed data chunk has been sequentially accessed over a time window; in response to the compressed data chunk not being sequentially accessed over the time window, uncompressing the compressed data chunk to form an uncompressed data chunk; and writing the uncompressed data chunk using channel/bank interleaving among the plurality of channels.” However, Kulkarni et al. disclose:
…in the DRAM chips (Col 7, lines 11-13: Sequential allocation of DRAM is performed in a manner similar to that described above for the flash memory);
The motivation for combining is based on the same rationale presented for the rejection of independent claim 6.
Huangfu et al., Jeon et al., Shim et al., and Kulkarni et al. do not appear to explicitly teach “evaluating, in the background, a compressed data chunk stored sequentially to determine whether the compressed data chunk has been sequentially accessed over a time window; in response to the compressed data chunk not being sequentially accessed over the time window, uncompressing the compressed data chunk to form an uncompressed data chunk; and writing the uncompressed data chunk using channel/bank interleaving among the plurality of channels.” However, Jin et al. disclose:
evaluating…a compressed data chunk stored sequentially to determine whether the compressed data chunk has been sequentially accessed over a time window (FIG. 10 S1003 Determine pattern map data stored in each map chunk included in mapping table; [0103] the controller 130 determines whether the map data stored in each of the map chunk is map data corresponding to random data, map data corresponding to sequential data);
in response to the compressed data chunk not being sequentially accessed over the time window, uncompressing the compressed data chunk to form an uncompressed data chunk (FIG. 10 S1007 Selectively preform compression depending on compression performance determination result; [0104] At step S1005, the controller 130 determines whether to perform compression, depending on the pattern determination result of the step S1003 for the map data stored in each map chunk; [0105] At step S1007, the controller 130 performs compression to different chunk units by applying different compression rates depending on the patterns of the map data stored in the map chunks, according to the results of determining whether to perform compression at the step S1005. For instance, the controller 130 may apply a compression rate of 0% in the case where the pattern of the map data stored in a map chunk is random data. The controller 130 may compress a map chunk size by applying a compression rate of 50% in the case where the pattern of the map data stored in a map chunk is sequential data); and
The motivation for combining is based on the same rationale presented for the rejection of independent claim 6.
Huangfu et al., Jeon et al., Shim et al., Kulkarni et al., and Jin et al. do not appear to explicitly teach “evaluating, in the background…writing the uncompressed data chunk using channel/bank interleaving among the plurality of channels.” However, Cho et al. disclose:
evaluating, in the background ([0053] Data stored in the assigned blocks is then compressed by compression block 1300 in a background operation BGO of data storage device 1000, and resulting compressed data is stored in other assigned blocks (e.g., data blocks) of storage medium 1100. Background operation BGO can include, for instance, a merge operation, an idle operation, a copy-back operation, etc.; [0054] However, background operation BGO can take other forms. In addition, operations such as data compression can be performed in an operation accompanying data transfer between memory blocks within data storage device 1000 without a host request)…
The motivation for combining is based on the same rationale presented for the rejection of independent claim 6.
Huangfu et al. disclose that channel interleaving is the default address mapping scheme and that this mapping scheme supports single memory accesses. Huangfu et al. also disclose that multiple continuous memory accesses should be aggregated locally within each rank to enable efficient local memory access and reduce data movement, as discussed above. Jin et al. disclose compressing sequentially accessed data at a rate of 50% and compressing randomly accessed data at a rate of 0%, as discussed above. The Huangfu et al. single memory accesses correspond to the Jin et al. uncompressed, random accesses, and the Huangfu et al. continuous memory accesses correspond to the Jin et al. compressed, sequential accesses. Huangfu et al., Jeon et al., Shim et al., Kulkarni et al., Jin et al., and Cho et al. do not appear to explicitly teach that the uncompressed data is stored using channel/bank interleaving and the compressed data is stored along a single channel. However, based on the teachings of Huangfu et al. and Jin et al., it would have been obvious to store the uncompressed, random accesses using channel/bank interleaving and to store the compressed, sequential accesses along a single channel because storing the uncompressed, random accesses using channel/bank interleaving would better leverage chip-level bandwidth and parallelism. Therefore, the combination of Huangfu et al., Jeon et al., Shim et al., Kulkarni et al., Jin et al., and Cho et al. discloses “writing the uncompressed data chunk using channel/bank interleaving among the plurality of channels.”
Response to Arguments
Applicant’s arguments, filed December 22, 2025, with respect to the rejection of claims have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Huangfu et al., Jeon et al., Shim et al., Kulkarni et al., Jin et al., and Cho et al. based on applicant’s amendment to the claims.
Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRACY A WARREN whose telephone number is (571)270-7288. The examiner can normally be reached M-Th 7:30am-5pm, Alternate F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arpan P. Savla can be reached at 571-272-1077. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TRACY A WARREN/Primary Examiner, Art Unit 2137