Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 01/03/2025, 01/14/2025, 08/21/2025, 09/12/2025, 09/25/2025, and 01/26/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Non-Statutory Type Double Patenting
The non-statutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A non-statutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
"A later patent claim is not patentably distinct from an earlier patent claim if the later claim is obvious over, or anticipated by, the earlier claim. In re Longi, 759 F.2d at 896, 225 USPQ at 651 (affirming a holding of obviousness-type double patenting because the claims at issue were obvious over claims in four prior art patents); In re Berg, 140 F.3d at 1437, 46 USPQ2d at 1233 (Fed. Cir. 1998) (affirming a holding of obviousness-type double patenting where a patent application claim to a genus is anticipated by a patent claim to a species within that genus)." ELI LILLY AND COMPANY v. BARR LABORATORIES, INC., United States Court of Appeals for the Federal Circuit, on petition for rehearing en banc (decided May 30, 2001).
Each of the following instant method claims matches in content a subset of the corresponding reference claim of App. No. 17/731,038 (Patent No. 12229055):
Instant claim 1: reference claim 1.
Instant claim 2: reference claim 2.
Instant claim 3: reference claim 3.
Instant claim 4: reference claim 4.
Instant claim 5: reference claim 6.
Instant claim 6: reference claim 7.
Instant claim 7: reference claim 19.
Instant claim 8: reference claim 20.
Instant claim 13: reference claim 19.
Instant claim 14: reference claim 20.
Instant claim 19: reference claim 19.
Instant claim 20: reference claim 20.
Instant claims (App. No. 19/009,685) vs. reference claims (App. No. 17/731,038; Patent No. 12229055):

Regarding instant claim 1 (reference claim 1):
Instant: A method comprising: receiving a first request to write first data at a first virtual location; writing the first data to a first physical location on a persistent storage system; recording a first mapping from the first virtual location to the first physical location; receiving a second request to write second data at the first virtual location; writing the second data to a second physical location on the persistent storage system indicated by a head pointer, the second physical location being different from the first physical location, wherein each block on the persistent storage system is written to once, before any block of the persistent storage system is written to a second time; and replacing the first mapping with a second mapping from the first virtual location to the second physical location.
Reference: A method comprising: receiving from a user a first request to write first data at a first virtual location; writing the first data to a first physical location on a persistent storage system; recording a first mapping from the first virtual location to the first physical location; receiving from the user a second request to write second data at the first virtual location; writing the second data to a second physical location on the persistent storage system, the second physical location corresponding to a next free block in a sequence of blocks on the persistent storage system, and being different from the first physical location; replacing the first mapping with a second mapping from the first virtual location to the second physical location; and marking the first physical location as dirty.

Regarding instant claim 2 (reference claim 2):
Instant: The method of claim 1, wherein the first virtual location and the first physical location are not correlated with one another.
Reference: The method of claim 1, wherein the first virtual location and the first physical location are not correlated with one another.

Regarding instant claim 3 (reference claim 3):
Instant: The method of claim 1, wherein the first physical location is determined by a head counter.
Reference: The method of claim 1, wherein the first physical location is determined by a head counter.

Regarding instant claim 4 (reference claim 4):
Instant: The method of claim 3, further comprising after writing the data to the first physical location, updating the head counter.
Reference: The method of claim 3, further comprising writing the data to the first physical location, updating the head counter.

Regarding instant claim 5 (reference claim 5 cancelled; compared to reference claim 6):
Instant: The method of claim 1, wherein writing the second data to the second physical location occurs without performing a read from the first physical location.
Reference: The method of claim 1, wherein writing the second data to the second physical location occurs without performing a read from the first physical location.

Regarding instant claim 6 (reference claim 7):
Instant: The method of claim 1, further comprising: calculating a checksum for the first data; and recording the checksum in metadata associated with the persistent storage system.
Reference: The method of claim 1, further comprising: calculating a checksum for the first data; and recording the checksum in metadata associated with the persistent storage system.

Regarding instant claim 7 (reference claim 19):
Instant: A system comprising: one or more processors; and a memory; wherein the memory comprises instructions which, when executed by the one or more processors, configure the one or more processors to perform the method of claim 1.
Reference: A system comprising: one or more processors; and a memory; wherein the memory comprises instructions which, when executed by the one or more processors, configure the one or more processors to perform the method of claim 1.

Regarding instant claim 8 (reference claim 20):
Instant: One or more non-transitory computer readable media comprising instructions which, when executed by one or more processors, cause the one or more processors to perform the method of claim 1.
Reference: One or more non-transitory computer readable media comprising instructions which, when executed by one or more processors, cause the one or more processors to perform the method of claim 1.

Regarding instant claim 13 (reference claim 19):
Instant: A system comprising: one or more processors; and a memory; wherein the memory comprises instructions which, when executed by the one or more processors, configure the one or more processors to perform the method of claim 9.
Reference: A system comprising: one or more processors; and a memory; wherein the memory comprises instructions which, when executed by the one or more processors, configure the one or more processors to perform the method of claim 1.

Regarding instant claim 14 (reference claim 20):
Instant: One or more non-transitory computer readable media comprising instructions which, when executed by one or more processors, cause the one or more processors to perform the method of claim 9.
Reference: One or more non-transitory computer readable media comprising instructions which, when executed by one or more processors, cause the one or more processors to perform the method of claim 1.

Regarding instant claim 19 (reference claim 19):
Instant: A system comprising: one or more processors; and a memory; wherein the memory comprises instructions which, when executed by the one or more processors, configure the one or more processors to perform the method of claim 15.
Reference: A system comprising: one or more processors; and a memory; wherein the memory comprises instructions which, when executed by the one or more processors, configure the one or more processors to perform the method of claim 1.

Regarding instant claim 20 (reference claim 20):
Instant: One or more non-transitory computer readable media comprising instructions which, when executed by one or more processors, cause the one or more processors to perform the method of claim 15.
Reference: One or more non-transitory computer readable media comprising instructions which, when executed by one or more processors, cause the one or more processors to perform the method of claim 1.
Regarding instant method claims 9, 10, 11, and 12, they match in content a subset of the combination of reference method claims 1 and 12 (App. No. 16/544,605; Patent No. 11347653). The instant claims recite stripes 0 ... N and drives/storage devices 0 ... M-1, whereas reference claim 1 recites a plurality of stripes and a plurality of storage devices.
Regarding instant method claims 15 and 16, they match in content the combination of reference method claims 1 and 10 (App. No. 16/544,605; Patent No. 11347653).
Regarding instant method claims 17 and 18, they match in content reference method claim 12 (App. No. 16/544,605; Patent No. 11347653).
The wording of the instant claims and the reference claims differs, and some instant claim elements recite more detailed implementation steps, but the claims contain the same teachings, and no new inventive concept is introduced in the instant claims.
Instant claims (App. No. 19/009,685) vs. reference claims (App. No. 16/544,605; Patent No. 11347653):

Regarding instant claim 9 (reference claim 1):
Instant: A method comprising writing a plurality of stripes i = 0 ... N to a plurality of drives 0 ... M-1, wherein each stripe i includes a plurality of data blocks and at least one parity block, and wherein a starting data block of stripe i is written to drive i modulo M.
Reference: A method comprising: receiving a request to write data at a virtual location; writing the data to a physical location on a persistent storage device of a plurality of persistent storage devices, wherein the physical location corresponds to a block within a stripe, the stripe comprising a plurality of blocks, each block being a physical location on each of the plurality of persistent storage devices; after completing the stripe by writing received data to the physical location of each of the plurality of persistent storage devices, updating a head counter; and recording a mapping from the virtual location to the physical location; wherein the physical location corresponds to a next free block in a sequence of blocks on the persistent storage device.

Regarding instant claim 10:
Instant: The method of claim 9, wherein a parity block of stripe i is written to a first one of the drives and a parity block of stripe i+1 is written to a second one of the drives different than the first one of the drives.

Regarding instant claim 11:
Instant: The method of claim 9, wherein the plurality of drives comprises a plurality of persistent storage devices.

Regarding instant claim 12 (reference claim 12):
Instant: The method of claim 9, wherein stripe i includes the data or parity blocks at physical location i on drives 0 ... M-1.
Reference: The method of claim 1, further comprising: determining that a predetermined number of blocks within the stripe have been written; calculating parity corresponding to the data written to the predetermined number of blocks within the stripe; and writing the parity data in one or more blocks within the stripe.

Regarding instant claim 15 (reference claims 1 and 10):
Instant: A method comprising: writing a plurality of stripes i = 0 ... N to a plurality of drives 0 ... M-1, wherein each stripe includes a plurality of data blocks and at least one parity block, and wherein a starting data block of stripe i is written to drive i modulo M; marking one or more data blocks in the plurality of stripes as dirty, wherein a tail counter identifies a first physical location on a first one of the drives that stores an oldest non-dirty data block, and wherein a head counter identifies a second physical location on a second one of the drives with a next free block; identifying, on the first drive, a first data block stored at the first physical location indicated by the tail pointer; storing, on the second drive, the first data block at the second physical location indicated by the head pointer; and marking the first physical location on the first drive storing the first data block as dirty.
Reference (claim 1): A method comprising: receiving a request to write data at a virtual location; writing the data to a physical location on a persistent storage device of a plurality of persistent storage devices, wherein the physical location corresponds to a block within a stripe, the stripe comprising a plurality of blocks, each block being a physical location on each of the plurality of persistent storage devices; after completing the stripe by writing received data to the physical location of each of the plurality of persistent storage devices, updating a head counter; and recording a mapping from the virtual location to the physical location; wherein the physical location corresponds to a next free block in a sequence of blocks on the persistent storage device.
Reference (claim 10): The method of claim 8, further comprising, in response to determining that a garbage collection condition is met: determining a block at the tail of the sequence of blocks; writing the data at the block to the head of the sequence of blocks; and updating the mapping based on the writing.

Regarding instant claim 16:
Instant: The method of claim 15, further comprising following the storing of the first data block on the second drive, updating a mapping so that a virtual location which previously mapped to the first physical location on the first drive now maps to the second physical location on the second drive.

Regarding instant claim 17 (reference claim 12):
Instant: The method of claim 15, further comprising following the storing of the first data block on the second drive, storing, on the second drive, a parity block at the second physical location indicated by the head pointer.
Reference: The method of claim 1, further comprising: determining that a predetermined number of blocks within the stripe have been written; calculating parity corresponding to the data written to the predetermined number of blocks within the stripe; and writing the parity data in one or more blocks within the stripe.

Regarding instant claim 18:
Instant: The method of claim 15, wherein a parity block of stripe i is written to a first one of the drives and a parity block of stripe i+1 is written to a second one of the drives different than the first one of the drives.
Potential Allowable Subject Matter
Claims 1-20 are not currently rejected over the prior art under 35 U.S.C. §§ 102/103, and could become allowable subject matter if the double patenting rejections are overcome.
The following is an Examiner's statement of reasons for potential allowability:
Claim 1 states, ‘A method comprising: receiving a first request to write first data at a first virtual location; writing the first data to a first physical location on a persistent storage system; recording a first mapping from the first virtual location to the first physical location; receiving a second request to write second data at the first virtual location; writing the second data to a second physical location on the persistent storage system indicated by a head pointer, the second physical location being different from the first physical location, wherein each block on the persistent storage system is written to once, before any block of the persistent storage system is written to a second time; and replacing the first mapping with a second mapping from the first virtual location to the second physical location.’
The prior art teachings of the various elements of claim 1 are set forth below.
Prior art Canepa et al. (US 20140325117 A1) [Canepa] discloses:
‘receiving a first request to write first data at a first virtual location’ (Canepa [0097] teaches that a plurality of host writes are received by an I/O device, such as an SSD. Each of the host writes comprises a respective logical block address (LBA) in a logical block address space and respective data. A logical address/location is similar to a virtual address/location. Canepa teaches write requests being received from a plurality of hosts. Canepa [0003] teaches that host write data is termed user data, i.e., the host is termed the user. Hence, receiving a request from a host is similar to receiving a request from a user, and this includes the scenario of one user being the first to issue a write request.)
‘writing the first data to a first physical location on a persistent storage system’ (Canepa [0097] teaches a plurality of host writes being received by an I/O device, such as an SSD: "For each of the host write command ..., the I/O device determines corresponding map information comprising a respective physical location in a non-volatile memory of the I/O device for the respective data"); ‘recording a first mapping from the first virtual location to the first physical location’ (Canepa [0095]: "... when a host write arrives at the SSD, an LBA of the write is associated via the two-level map with a corresponding second-level map entry, the corresponding second-level map entry is updated with a determined physical location in the non-volatile memory to store data of the host write." Canepa [0062]: "... each of the second-level map entries associating a logical block address in a logical block address space with a physical location in a non-volatile memory of the I/O device ...". Updating the map entry is similar to recording the mapping, and the logical block address (LBA) is similar to the virtual location);
Prior art Chen et al. (US 9811275 B2) [Chen] discloses: ‘receiving a second request to write second data at the first virtual location’ (Chen claim 1 teaches a memory system receiving a first write request from a host, the first write request designating a first logical address, and receiving a second write request from the host, the second write request also designating the first logical address); ‘writing the second data to a second physical location on the persistent storage system [indicated by a head pointer], the second physical location being different from the first physical location’ (Chen claim 1 teaches writing second data in a second storage location in the non-volatile memory in response to the second write request, the second storage location corresponding to a second physical address that is different from the first physical address/location), [wherein each block on the persistent storage system is written to once, before any block of the persistent storage system is written to a second time]; and ‘replacing the first mapping with a second mapping from the first virtual location to the second physical location’ (Chen claim 1 teaches a memory system comprising a non-volatile memory configured to store first information, the first information being used to manage a correspondence between logical addresses and physical addresses, the physical addresses specifying storage locations in the non-volatile memory, and a controller configured to receive a first write request from a host, the first write request designating a first logical address, write first data in a first storage location in the non-volatile memory in response to the first write request, the first storage location corresponding to a first physical address, register the first physical address as a physical address corresponding to the first logical address in the first information, receive a second write request from the host, the second write request designating the first logical address, write second data in a second storage location in the non-volatile memory in response to the second write request, the second storage location corresponding to a second physical address, and perform a first process of changing a physical address corresponding to the first logical address in the first information from the first physical address to the second physical address (similar to replacing the first mapping with a second mapping)).
Prior art Hale et al. (US 5502836 A)[Hale] discloses:
‘writing the second data to a second physical location on the persistent storage system indicated by a head pointer, the second physical location being different from the first physical location’ (Hale, col. 8, lines 11-14, teaches that the next-free pointer [similar to a head pointer] maintains the position of the next-free location to receive a data block, and the next-block pointer maintains the position of the next data block to be relocated. These pointers are advanced after each transfer of a data block, as is further explained therein).
Prior art JING et al. (CN 105141891 A)[Jing] discloses:
‘wherein each block on the persistent storage system is written to once, before any block of the persistent storage system is written to a second time’ (Jing, section "invention contents," para. 6, teaches setting a second write count for a storage block: if the storage block includes a pre-alarm recording buffer space and the storage block is filled once (that is, becomes full), the second write count is increased by a2; if the pre-warning video buffer space is not included in the storage block and the memory block is filled once, the second write count is increased by a3, where a2 > a3).
Thus, Jing teaches that a second write to a storage/memory block happens only after the entire block has been written once. However, this is specific to certain blocks dealing with certain specific data, and the teaching does not cover writing a second time to a block only after all the blocks in the entire storage have been written once or filled with valid data.
No known prior art, taken alone or in combination, teaches writing to any block of a persistent storage system a second time until all blocks of the persistent storage system have been written for the first time, i.e., until the persistent storage system is full.
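For illustration only, the write-redirection scheme discussed above (writes always land at the block indicated by a head pointer, and the virtual-to-physical mapping is replaced on overwrite) can be sketched as follows. This is a hypothetical sketch, not the applicant's or any cited reference's implementation; all names are illustrative:

```python
class LogStructuredStore:
    """Illustrative sketch of the claim 1 scheme: every write lands at the
    block indicated by a head pointer, so each block is written once before
    any block is written a second time."""

    def __init__(self, num_blocks):
        self.blocks = [None] * num_blocks  # simulated persistent storage
        self.head = 0                      # head pointer: next free block
        self.mapping = {}                  # virtual location -> physical location

    def write(self, virtual_loc, data):
        physical_loc = self.head           # writes always go to the head
        self.blocks[physical_loc] = data
        self.head = (self.head + 1) % len(self.blocks)
        # Replace any earlier mapping for this virtual location.
        self.mapping[virtual_loc] = physical_loc
        return physical_loc


store = LogStructuredStore(8)
first = store.write(0, b"first")    # first request at virtual location 0
second = store.write(0, b"second")  # second request at the same virtual location
assert first != second              # redirected to a different physical block
assert store.mapping[0] == second   # first mapping replaced by the second
```

The sketch shows why the second write to the same virtual location necessarily lands at a different physical location: the head pointer has already advanced past the first block.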
Claim 9 states, ‘A method comprising writing a plurality of stripes i = 0 ... N to a plurality of drives 0 ... M-1, wherein each stripe i includes a plurality of data blocks and at least one parity block, and wherein a starting data block of stripe i is written to drive i modulo M.’
Prior art such as Canepa and Hale teaches receiving a request, writing data to a physical location on a persistent storage device, recording the mapping from virtual to physical locations, using a head counter (next-free pointer) that points to the location of the next free block, and updating the head counter after writing to the block pointed to by the current head counter.
Prior art Hitz et al. (US 20040064474 A1) [Hitz], [0016], teaches that a file system operating on top of a RAID subsystem tends to treat the RAID array as a large collection of blocks wherein each block is numbered sequentially across the RAID array. The data blocks of a file are then scattered across the data disks to fill each stripe as fully as possible, thereby placing each data block in a stripe on a different disk. Once N data blocks of a first stripe are allocated to N data disks of the RAID array, remaining data blocks are allocated on subsequent stripes in the same fashion until the entire file is written in the RAID array. Thus, a file is written across the data disks of a RAID system in stripes comprising modulo N data blocks. This has the disadvantage of requiring a single file to be accessed across up to N disks, thereby requiring N disk seeks. Consequently, some prior art file systems attempt to write all the data blocks of a file to a single disk.
No known prior art, taken alone or in combination, teaches writing a plurality of stripes i = 0 ... N to a plurality of drives 0 ... M-1, wherein each stripe i includes a plurality of data blocks and at least one parity block, and wherein a starting data block of stripe i is written to drive i modulo M.
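The "drive i modulo M" placement recited in claim 9 can be illustrated with a short sketch. This is hypothetical code for illustration only; the round-robin continuation of blocks after the starting drive is an assumption, since the claim language only fixes the drive of each stripe's starting data block:

```python
def stripe_layout(stripe_index, num_drives, blocks_per_stripe):
    """Return the drive index for each block of stripe i, starting the
    stripe at drive (i modulo M) and continuing round-robin."""
    start = stripe_index % num_drives  # starting data block goes to drive i mod M
    return [(start + j) % num_drives for j in range(blocks_per_stripe)]


# With M = 4 drives: stripe 0 starts at drive 0, stripe 1 at drive 1,
# and stripe 5 wraps around to start at drive 1 again (5 mod 4 = 1).
assert stripe_layout(0, 4, 4) == [0, 1, 2, 3]
assert stripe_layout(1, 4, 4) == [1, 2, 3, 0]
assert stripe_layout(5, 4, 4) == [1, 2, 3, 0]
```

The effect of the rotation is that parity and data blocks of successive stripes shift across the drives, rather than always beginning on drive 0.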
Claim 15 states, ‘A method comprising: writing a plurality of stripes i = 0 ...N to a plurality of drives 0 ... M-1, wherein each stripe includes a plurality of data blocks and at least one parity block, and wherein a starting data block of stripe i is written to drive i modulo M; marking one or more data blocks in the plurality of stripes as dirty, wherein a tail counter identifies a first physical location on a first one of the drives that stores an oldest non-dirty data block, and wherein a head counter identifies a second physical location on a second one of the drives with a next free block; identifying, on the first drive, a first data block stored at the first physical location indicated by the tail pointer; storing, on the second drive, the first data block at the second physical location indicated by the head pointer; and marking the first physical location on the first drive storing the first data block as dirty’.
The prior art teachings of the various elements of claim 15 are set forth below.
Prior art Canepa et al. (US 20140325117 A1) [Canepa] discloses:
(Canepa [0003] teaches that SSDs use garbage collection (or recycling) to reclaim free space created when a logical block address (LBA) is over-written with new data (rendering a previous physical location associated with that LBA unused). Marking a physical location unused is similar to marking/recording it as dirty. The specification defines at [0030]: "... The physical location corresponding to the previous location may be marked as dirty, indicating that it is free to be written over." This indicates that marking dirty is a way to make a location free to be written, and Canepa does this by marking the location as unused, so that it can be used or written. So, Canepa teaches marking a first physical location unused/dirty when a second physical location contains the latest data corresponding to the first physical location. Canepa [0100]: step 971 generally comprises maintaining a first-update-ordered list of the updated second-level map pages; step 973 generally comprises maintaining a head pointer to an oldest one of the updated second-level map pages; step 975 generally comprises maintaining a tail pointer to a youngest one of the updated second-level map pages. Canepa [0095] teaches that, when a host write arrives at the SSD, an LBA of the write is associated via the two-level map with a corresponding second-level map entry, and the corresponding second-level map entry is updated with a determined physical location in the non-volatile memory to store data of the host write. Canepa [0062]: "... each of the second-level map entries associating a logical block address in a logical block address space with a physical location in a non-volatile memory of the I/O device ..." Updating the map entry is similar to recording the mapping, and the logical block address (LBA) is similar to the virtual location.)
Prior art Hale et al. (US 5502836 A)[Hale] discloses:
Hale, col. 8, lines 11-14, teaches that the next-free pointer maintains the position of the next-free location to receive a data block, and the next-block pointer maintains the position of the next data block to be relocated; these pointers are advanced after each transfer of a data block, as is further explained therein. The next-free pointer in Hale constitutes the same thing as the head counter in applicant's system. Transferring data involves removing data from one location and putting/writing the data to another location. The next-block pointer points to the data block to be removed, and the next-free pointer points to the location that will receive the data, i.e., where the data will be placed/written. After the transfer is complete, both pointers are updated. The next-free pointer is updated because the data was written to the location pointed to by this pointer, and it now needs to point to the next free location to receive the next data block. Hale, col. 7, line 28 to col. 8, line 61, teaches a restriping process when a new disk drive E 139 is added. For restriping, the data of the current stripe is first temporarily stored at the end of the new disk and then moved to the row/stripe block by block, and until the entire row is restriped the next-free pointer keeps pointing to the next free location within that stripe. Once a stripe is complete, the process continues for the next stripe.
Prior art Hitz et al. (US 20040064474 A1) [Hitz], [0016], teaches that a file system operating on top of a RAID subsystem tends to treat the RAID array as a large collection of blocks wherein each block is numbered sequentially across the RAID array. The data blocks of a file are then scattered across the data disks to fill each stripe as fully as possible, thereby placing each data block in a stripe on a different disk. Once N data blocks of a first stripe are allocated to N data disks of the RAID array, remaining data blocks are allocated on subsequent stripes in the same fashion until the entire file is written in the RAID array. Thus, a file is written across the data disks of a RAID system in stripes comprising modulo N data blocks. This has the disadvantage of requiring a single file to be accessed across up to N disks, thereby requiring N disk seeks. Consequently, some prior art file systems attempt to write all the data blocks of a file to a single disk.
However, no known prior art, taken alone or in combination, teaches writing a plurality of stripes i = 0 ... N to a plurality of drives 0 ... M-1, wherein each stripe i includes a plurality of data blocks and at least one parity block, and wherein a starting data block of stripe i is written to drive i modulo M.
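One relocation step of the head/tail scheme recited in claim 15 might be sketched as follows. This is hypothetical code for illustration only, not the applicant's or any reference's implementation; a single flat block array stands in for the plurality of drives, and the helper name is illustrative:

```python
def relocate_oldest_block(blocks, mapping, dirty, head, tail):
    """Move the oldest non-dirty block from the tail to the head:
    the block is rewritten at the head location, the virtual-to-physical
    mapping is updated, and the old location is marked dirty."""
    # Identify the virtual location currently mapped to the tail position.
    virtual_loc = next(v for v, p in mapping.items() if p == tail)
    blocks[head] = blocks[tail]   # store the data block at the head location
    mapping[virtual_loc] = head   # remap to the new physical location
    dirty.add(tail)               # old physical location is now dirty
    # Advance both counters past the blocks just consumed/produced.
    return (head + 1) % len(blocks), (tail + 1) % len(blocks)


blocks = [b"a", b"b", None, None]
mapping = {10: 0, 11: 1}          # virtual -> physical
dirty = set()
head, tail = relocate_oldest_block(blocks, mapping, dirty, head=2, tail=0)
assert blocks[2] == b"a"          # oldest block rewritten at the head
assert mapping[10] == 2           # mapping follows the relocated block
assert 0 in dirty                 # old location marked dirty
assert (head, tail) == (3, 1)     # both counters advance
```

The sketch mirrors the interplay between the tail counter (oldest non-dirty block) and the head counter (next free block) that the claim recites.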
Claims 2-8 are dependent on claim 1 and are therefore potentially allowable due at least to this dependence.
Claims 10-14 are dependent on claim 9 and are therefore potentially allowable due at least to this dependence.
Claims 16-20 are dependent on claim 15 and are therefore potentially allowable due at least to this dependence.
Conclusion
The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, is recorded in pe2e_search_notes.pdf and attached as OA.APPENDIX.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mohammad Hasan, whose telephone number is (571) 270-1737 (email: Mohammad.Hasan@uspto.gov). The examiner can normally be reached 9am-5pm, Monday through Friday.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tim Vo, can be reached at 571-272-3642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/M.S.H/Examiner, Art Unit 2138
/SHAWN X GU/
Primary Examiner, AU2138