DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the communication filed 9/6/2023.
Claims 1-51 are presented for examination.
Examiner Notes
Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, or 365(c) is acknowledged.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 12/14/2023. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claim 2 is objected to because of the following informalities:
“the first tier comprise” at line 1 of claim 2 should be: the first tier comprises.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4, 9, 16-20, 22-23, 30-33, 35-36, 43-45 and 47-51 are rejected under 35 U.S.C. 103 as being unpatentable over Allen (US 9875192 B1) in view of Bohrer et al. (US 20030061352 A1, hereafter Bohrer).
Regarding claim 1, Allen discloses: A method comprising:
performing, by at least one parallel processing unit (“PPU”), at least one data access in response to at least one data access request by (see lines 4-6 of cols. 2-3; “the virtual machine may be configured to launch and run an application that executes a plurality of threads in parallel in a highly parallel processor, such as a graphics processing unit”, “receiving an open call from a thread of a block of threads being executed by the graphics processing unit to open the file specified by the open call”, “receiving a read call from a thread of a block of threads to read a block of data from the previously opened file”):
identifying one or more data locations (see lines 39-53 of col. 2; “receiving a read call … first determine whether the block of data already resides in least recently used cache of the file buffer. If the block of data resides in the cache, the thread may obtain the block of data directly from the cache. Otherwise, the system may compute an input file offset, which may be based on a global file offset for the block of threads and a local file offset for the requesting thread, and copy that block of data, starting at the location corresponding to the input file offset”. Also see lines 50-29 of cols. 11-12);
accessing at least a first portion of data stored in at least a first location of the one or more data locations if the first location is on a first storage component of a plurality of data storage components that is accessible by the at least one PPU (see lines 39-53 of col. 2; “If the block of data resides in the cache, the thread may obtain the block of data directly from the cache”. Also see lines 54-58 of col. 3; “allow the program performing the computation to selectively read only the portions of the data that are actually needed for the computation”); and
causing at least one server interface to access at least a second portion of the data stored in at least a second location of the one or more data locations if the second location is on a second storage component of the plurality of data storage components (see lines 39-53 of col. 2; “determine whether the block of data already resides in least recently used cache of the file buffer … Otherwise, the system may compute an input file offset, which may be based on a global file offset for the block of threads and a local file offset for the requesting thread, and copy that block of data, starting at the location corresponding to the input file offset, into a read page allocated to the least recently used cache of the file buffer”. Note: as recited in current claim 1, the claimed “at least one server interface” is very broad and can be considered any kind of interface used to access the second location of the second tier).
Allen does not disclose:
the plurality of data storage components is a plurality of data tiers.
However, Bohrer discloses:
accessing at least a first portion of data stored in at least a first location of the one or more data locations if the first location is on a first tier of a plurality of data tiers that is accessible by the at least one processor (see [0007], [0014]-[0015]; “receives a request for a data object” and “whether the first fragment of the requested data is present (and valid) in its file cache …. format the fragment as one or more network packets”); and
causing at least one server interface to access at least a second portion of the data stored in at least a second location of the one or more data locations if the second location is on a second tier of the plurality of data tiers (see [0007], [0014]-[0015]; “receives a request for a data object” and “whether the first fragment of the requested data is present (and valid) in its file cache …. retrieves a subsequent fragment of the requested data object from a lower tier of storage such as a local disk, networked storage, or the system memory of another server”. Note: since the lower/second tier is networked storage or the system memory of another server, a server interface must be used to access such networked storage or system memory; e.g., at least switch 110 of Fig. 1 can be interpreted as the claimed server interface for accessing networked storage 133, or the bus 125/127 of another server as shown in Fig. 2).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the retrieval of different portions of a requested file or data from different storage components in Allen by including the retrieval of different portions of a requested file or data from different data tiers, including a local cache tier and a networked storage tier, as taught by Bohrer; the combination of Allen and Bohrer would thus disclose the limitations missing from Allen. The motivation is that it would provide a mechanism for reducing the storage cost of data (see [0014] from Bohrer; “storing only a portion or fragment of a cached file in the actual file cache while storing the remainder of the file or data in a lower tier of storage. The file cache typically comprises a portion of the server's volatile system memory while the lower tier of storage is typically a slower and less expensive form of storage”).
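To aid the applicant's review, the combined mechanism as mapped above (check the first-tier cache at an input file offset computed from the global and local offsets; otherwise cause a second-tier access and populate a cache page) may be sketched as follows. The sketch is illustrative only; all identifiers (TieredStore, read_block, lower_tier) are hypothetical and appear in neither Allen nor Bohrer.

```python
class TieredStore:
    """Two-tier store: a first-tier in-memory cache over a second-tier backing store."""

    def __init__(self, lower_tier):
        self.cache = {}               # first tier: volatile in-memory cache
        self.lower_tier = lower_tier  # second tier: e.g., networked storage

    def read_block(self, global_offset, local_offset):
        # Per Allen: the input file offset is based on a global file offset
        # for the block of threads plus a local file offset for the thread.
        input_offset = global_offset + local_offset
        if input_offset in self.cache:      # block already resides in the cache
            return self.cache[input_offset]
        # Per Bohrer: otherwise the lower tier is accessed (here a plain dict
        # stands in for the server interface) and a cache page is populated.
        block = self.lower_tier[input_offset]
        self.cache[input_offset] = block
        return block
```

For example, a store constructed over a lower tier holding a block at offset 4096 returns that block from the lower tier on the first read and from the first-tier cache thereafter.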
Regarding Claim 2, the rejection of Claim 1 is incorporated, and the combination of Allen and Bohrer further discloses: the first tier comprise[s] at least one cache stored in volatile memory (see lines 4-6 of col. 5 and lines 39-55 of col. 9 from Allen; “The system memory 110 may represent physical memory of the host computing system, which may include random access memory” and “The least recently used cache 316 for read pages may be one or more buffers in system memory”. Also see [0007] and [0014] from Bohrer; “The file cache typically comprises a portion of the server's volatile system memory”).
Regarding Claim 4, the rejection of Claim 1 is incorporated, and the combination of Allen and Bohrer further discloses: waiting, by the at least one PPU, to allocate space in a cache for the second portion of the data until after the at least one server interface accesses the second portion (see steps 612-616 of Fig. 6, lines 39-53 of col. 2, and lines 59-14 of cols. 13-14 from Allen; “determine whether the block of data already resides in least recently used cache of the file buffer … Otherwise, the system may compute an input file offset, which may be based on a global file offset for the block of threads and a local file offset for the requesting thread, and copy that block of data, starting at the location corresponding to the input file offset, into a read page allocated to the least recently used cache of the file buffer” and “obtaining the requested data from wherever it resides. First, the system may allocate a read page in cache … This address will be used for storing the retrieved data”).
Regarding Claim 9, the rejection of Claim 1 is incorporated, and the combination of Allen and Bohrer further discloses: wherein the at least one server interface is to access the data within at least one particular tier of the plurality of data tiers other than the first tier, and the at least one particular tier comprises at least one of a plurality of different storage locations operating according to different storage methods (see Fig. 1 and [0007] from Bohrer; “the server determines whether the first fragment of the requested data is present (and valid) in its file cache … the server retrieves a subsequent fragment of the requested data object from a lower tier of storage such as a local disk, networked storage, or the system memory of another server”. Note: the networked storage and the system memory of another server can reasonably be considered the claimed “a plurality of different storage locations operating according to different storage methods”).
Regarding Claim 16, the rejection of Claim 1 is incorporated, and the combination of Allen and Bohrer further discloses: performing, by the at least one PPU, at least one process that performs the at least one data access in response to the at least one data access request, the at least one data access request to originate from an application executing within a trusted execution environment; and extending the trusted execution environment to include the at least one process (see lines 12-14 and 39-53 of col. 2 from Allen; “the virtual machine may be configured to launch and run an application that executes a plurality of threads in parallel in a highly parallel processor” and “receiving a read call from a thread of a block of threads to read a block of data … the thread may obtain the block of data directly from the cache. Otherwise … copy that block of data, starting at the location corresponding to the input file offset, into a read page allocated to the least recently used cache of the file buffer”. It is understood that a virtual machine can reasonably be considered a trusted execution environment. In addition, since the data accessing process is performed in response to requests from the VM or trusted execution environment, the trusted execution environment is extended to include that data accessing process).
Regarding Claim 17, the rejection of Claim 16 is incorporated, and the combination of Allen and Bohrer further discloses: extending the trusted execution environment to include the at least one server interface (see Fig. 1 and [0007] from Bohrer; “the server determines whether the first fragment of the requested data is present (and valid) in its file cache … the server retrieves a subsequent fragment of the requested data object from a lower tier of storage such as a local disk, networked storage, or the system memory of another server”. Since at least switch 110 of Fig. 1, as the claimed server interface, is triggered to perform the data access operation in response to requests from the VM or trusted execution environment, the switch 110, i.e., the claimed server interface, is extended to be included in the VM or trusted execution environment).
Regarding Claim 18, the rejection of Claim 1 is incorporated, and the combination of Allen and Bohrer further discloses: wherein the at least one data access comprises at least 100 data accesses performed in parallel by the at least one PPU (see lines 35-44 and 64-65 of col. 3 and lines 40-42 of col. 13 from Allen; “A graphics processing unit … may be beneficial for performing highly parallel tasks; that is, where a large number (e.g., thousands) of tasks may be processed in parallel in a similar fashion. As an example, a customer of a computing resource service provider having access to a virtual machine may desire to process a large data set that has been stored at a location accessible to the virtual machine, such as in a block-level data store attached to the virtual machine”, “a graphics processing unit may be able to process 8,000 threads in parallel” and “the thread of the block of threads may be operating in parallel, each thread of the block of threads may individually issue such a read call”. In the combined system, it is reasonable to conclude that there are at least 100 read calls requested by at least 100 threads performed in parallel by the graphics processing unit in at least one reasonable embodiment).
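The parallel access pattern relied on above, in which each of at least 100 threads individually issues a read call with an offset computed from its own thread ID, may be illustrated by the following sketch. The identifiers, block size, and data are hypothetical and are not drawn from Allen.

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 16                      # hypothetical block size
data = bytes(range(256)) * 100       # stand-in for a file in a block-level store

def read_call(thread_id):
    # Each thread computes its own input offset from its thread ID,
    # mirroring the per-thread read calls described in Allen.
    offset = thread_id * BLOCK_SIZE
    return data[offset:offset + BLOCK_SIZE]

# At least 100 read calls issued in parallel, one per worker thread.
with ThreadPoolExecutor(max_workers=100) as pool:
    blocks = list(pool.map(read_call, range(100)))
```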
Regarding Claim 19, the rejection of Claim 1 is incorporated, and the combination of Allen and Bohrer further discloses: transforming, by the at least one PPU, at least a portion of the first portion of the data (see lines 20-31 of col. 10 from Allen; “Once the data is in the least recently used cache buffers in the file buffer, the data may be read by the graphics processing unit … As the read data is processed, in 410, the block of threads may output the processed data to a file by executing a set of write calls containing the processed data. The processed data may be written by the graphics processing unit to a ring buffer of the file buffer of 404. Once the ring buffer is full or once it is determined that the ring buffer should be flushed, the processed data in the ring buffer may be flushed to the output file in 412”. At least a portion of the data read from the cache, i.e., the claimed first portion of the data, is processed and written into an output file, i.e., is transformed).
Regarding Claim 20, the rejection of Claim 1 is incorporated, and the combination of Allen and Bohrer further discloses: wherein the at least one server interface transforms at least a portion of the second portion of the data (see Fig. 1 and [0007] from Bohrer; “the server retrieves a subsequent fragment of the requested data object from a lower tier of storage such as a local disk, networked storage, or the system memory of another server”. At least switch 110, as the claimed at least one server interface, transforms at least a portion of the second portion of the data in moving it from the networked storage or the system memory of another server to the server holding the other portion of the data).
Regarding Claim 22, the rejection of Claim 1 is incorporated, and the combination of Allen and Bohrer further discloses: wherein the first tier comprises volatile memory (see lines 4-6 of col. 5 and lines 39-55 of col. 9 from Allen; “The system memory 110 may represent physical memory of the host computing system, which may include random access memory” and “The least recently used cache 316 for read pages may be one or more buffers in system memory”. Also see [0007] and [0014] from Bohrer; “The file cache typically comprises a portion of the server's volatile system memory”), the second tier comprises non-volatile memory (see [0007] and [0021] from Bohrer; “the second tier may represent a local disk, non-volatile networked storage”), and the at least one data access request requests information stored on the second tier in a manner identical to that used by the at least one data access request to request information stored on the first tier (see lines 12-14 and 39-53 of col. 2 from Allen; “the virtual machine may be configured to launch and run an application that executes a plurality of threads in parallel in a highly parallel processor” and “receiving a read call from a thread of a block of threads to read a block of data … the thread may obtain the block of data directly from the cache. Otherwise … copy that block of data, starting at the location corresponding to the input file offset, into a read page allocated to the least recently used cache of the file buffer”. In the combined system, whether the data request accesses the volatile memory cache tier or the non-volatile networked storage, the request is issued by a thread of a block of threads, i.e., in the same/identical manner).
Regarding Claim 23: Claim 23 is a system claim corresponding to method Claim 1 and is rejected for the same reasons set forth in the rejection of Claim 1 above (note: Allen also teaches the limitations of “memory storing first instructions … cause the system to perform”; see lines 19-25 of col. 19 from Allen; “Each server typically … include a computer-readable storage medium (e.g., a hard disk, random access memory, read only memory, etc.) storing instructions that, when executed by a processor of the server, allow the server to perform its intended functions”).
Regarding Claim 30, the rejection of Claim 23 is incorporated, and the combination of Allen and Bohrer further discloses: wherein the at least one data access request originates from an application executing within a trusted execution environment, and the first instructions, if performed by the at least one PPU, cause the system to extend the trusted execution environment to include at least a portion of one of the plurality of data tiers storing at least a portion of the data (see lines 12-14 and 39-53 of col. 2 from Allen and [0007] from Bohrer; “the virtual machine may be configured to launch and run an application that executes a plurality of threads in parallel in a highly parallel processor”, “receiving a read call from a thread of a block of threads to read a block of data … the thread may obtain the block of data directly from the cache. Otherwise … copy that block of data, starting at the location corresponding to the input file offset, into a read page allocated to the least recently used cache of the file buffer” and “a first portion or fragment of a requested data object in a first tier of storage while retaining subsequent portions of the data object in a second or lower tier of storage”. It is understood that a virtual machine can reasonably be considered a trusted execution environment. In addition, since the different portions from the different data tiers are retrieved in response to requests from the VM or trusted execution environment, the trusted execution environment is extended to include the retrieved portions of the data).
Regarding Claim 31: Claim 31 is a system claim corresponding to method Claim 16 and is rejected for the same reasons set forth in the rejection of Claim 16 above.
Regarding Claim 32: Claim 32 is a system claim corresponding to method Claim 17 and is rejected for the same reasons set forth in the rejection of Claim 17 above.
Regarding Claim 33, the rejection of Claim 23 is incorporated, and the combination of Allen and Bohrer further discloses: wherein the first instructions, if performed by the at least one PPU, cause the system to identify the one or more data locations (see lines 39-53 of col. 2 and lines 39-55 of col. 9 from Allen; “receiving a read call … first determine whether the block of data already resides in least recently used cache of the file buffer. If the block of data resides in the cache, the thread may obtain the block of data directly from the cache. Otherwise, the system may compute an input file offset, which may be based on a global file offset for the block of threads and a local file offset for the requesting thread, and copy that block of data, starting at the location corresponding to the input file offset”).
Regarding Claim 35, the rejection of Claim 23 is incorporated, and the combination of Allen and Bohrer further discloses: at least one processor implementing at least one server node that implements the at least one server interface, the at least one PPU implementing at least one client node that implements at least one client interface that performs the at least one data access (see Fig. 1 and [0007] from Bohrer; “the server determines whether the first fragment of the requested data is present (and valid) in its file cache … the server retrieves a subsequent fragment of the requested data object from a lower tier of storage such as a local disk, networked storage, or the system memory of another server”. In the combined system, the server contains at least one PPU as the client node implementing the client interface that accesses data from the cache, while the processor of another server, or the switch, implements the server interface).
Regarding Claim 36: Claim 36 is a system claim corresponding to method Claim 1 and is rejected for the same reasons set forth in the rejection of Claim 1 above (note: Allen also teaches the limitations of “(“PPU”) comprising: one or more circuits”; see lines 63-65 of col. 8 of Allen; “The graphics processing unit 302 may be an electronic circuit configured to process sets of data in a highly parallel fashion”).
Regarding Claim 43, the rejection of Claim 36 is incorporated, and the combination of Allen and Bohrer further discloses: wherein the at least one data access request originates from an application executing within a trusted execution environment, and the one or more circuits are to extend the trusted execution environment to include at least a portion of one of the plurality of data tiers storing at least a portion of the data (see lines 12-14 and 39-53 of col. 2 from Allen and [0007] from Bohrer; “the virtual machine may be configured to launch and run an application that executes a plurality of threads in parallel in a highly parallel processor”, “receiving a read call from a thread of a block of threads to read a block of data … the thread may obtain the block of data directly from the cache. Otherwise … copy that block of data, starting at the location corresponding to the input file offset, into a read page allocated to the least recently used cache of the file buffer” and “a first portion or fragment of a requested data object in a first tier of storage while retaining subsequent portions of the data object in a second or lower tier of storage”. It is understood that a virtual machine can reasonably be considered a trusted execution environment. In addition, since the different portions from the different data tiers are retrieved in response to requests from the VM or trusted execution environment, the trusted execution environment is extended to include the retrieved portions of the data).
Regarding Claim 44: Claim 44 is a system claim corresponding to method Claim 16 and is rejected for the same reasons set forth in the rejection of Claim 16 above.
Regarding Claim 45: Claim 45 is a system claim corresponding to method Claim 17 and is rejected for the same reasons set forth in the rejection of Claim 17 above.
Regarding Claim 47: Claim 47 is a system claim corresponding to method Claim 22 and is rejected for the same reasons set forth in the rejection of Claim 22 above.
Regarding Claim 48, Allen discloses: A processor comprising: one or more circuits to (see lines 9-15 of col. 2 and lines 63-65 of col. 8; “the virtual machine may be configured to launch and run an application that executes a plurality of threads in parallel in a highly parallel processor, such as a graphics processing unit” and “The graphics processing unit 302 may be an electronic circuit configured to process sets of data in a highly parallel fashion”):
receive a request including a reference used by an application to access local volatile memory (see lines 33-50 of col. 13; “Each read call may include information usable to compute the location in the file from which data is to be read, such as a thread ID and/or local file offset. Additionally or alternatively, flow control offsets may be provided with the read call as well”. Also see lines 4-6 of col. 5 and lines 39-55 of col. 9; “The system memory 110 may represent physical memory of the host computing system, which may include random access memory” and “The least recently used cache 316 for read pages may be one or more buffers in system memory”. It is understood that the system memory is a type of local volatile memory);
use the reference to identify a data location within multiple data storage components comprising volatile memory directly accessible by the processor, and other data storage component (see lines 39-53 of col. 2 and lines 39-55 of col. 9; “receiving a read call … first determine whether the block of data already resides in least recently used cache of the file buffer. If the block of data resides in the cache, the thread may obtain the block of data directly from the cache. Otherwise, the system may compute an input file offset, which may be based on a global file offset for the block of threads and a local file offset for the requesting thread, and copy that block of data, starting at the location corresponding to the input file offset” and “The least recently used cache 316 for read pages may be one or more buffers in system memory”. Note: since the least recently used cache is part of the system memory, it is volatile memory directly accessible by the highly parallel processor, such as a graphics processing unit);
access the data location if the data location is within the volatile memory directly accessible by the processor (see lines 39-53 of col. 2 and lines 39-55 of col. 9; “receiving a read call … first determine whether the block of data already resides in least recently used cache of the file buffer. If the block of data resides in the cache, the thread may obtain the block of data directly from the cache”); and
cause a service to access the data location if the data location is within the other data storage component (see lines 39-53 of col. 2; “receiving a read call … first determine whether the block of data already resides in least recently used cache of the file buffer … Otherwise, the system may compute an input file offset, which may be based on a global file offset for the block of threads and a local file offset for the requesting thread, and copy that block of data, starting at the location corresponding to the input file offset”).
Allen does not disclose:
that the multiple data storage components are multiple data tiers and that the other data storage component is non-volatile memory.
However, Bohrer discloses:
identify a data location within multiple data tiers comprising volatile memory directly accessible by the processor, and non-volatile memory (see [0007]; “The first tier is typically the server's volatile system memory while the second tier may represent a local disk, non-volatile networked storage, or a remote system memory. When the server receives a request for a data object”, “determines whether the first fragment of the requested data is present (and valid) in its file cache”. Note: since the first tier or file cache of [0007] is actually the server's volatile system memory, such first tier or file cache is directly accessible by the processor of the server);
access the data location if the data location is within the volatile memory directly accessible by the processor (see [0007] and [0015]; “determines whether the first fragment of the requested data is present (and valid) in its file cache. If the first fragment is valid in the file cache, the server may format the fragment as one or more network packets” and “the server device responds by retrieving the first fragment of the file from the file cache”); and
cause a service to access the data location if the data location is within the non-volatile memory (see [0007], [0014]-[0015]; “the second tier may represent a local disk, non-volatile networked storage” and “whether the first fragment of the requested data is present (and valid) in its file cache …. retrieves a subsequent fragment of the requested data object from a lower tier of storage such as a local disk, networked storage, or the system memory of another server”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the retrieval of different portions of a requested file or data from different storage components in Allen by including the retrieval of different portions of a requested file or data from different data tiers, including a local cache tier and a networked storage tier, as taught by Bohrer; the combination of Allen and Bohrer would thus disclose the limitations missing from Allen. The motivation is that it would provide a mechanism for reducing the storage cost of data (see [0014] from Bohrer; “storing only a portion or fragment of a cached file in the actual file cache while storing the remainder of the file or data in a lower tier of storage. The file cache typically comprises a portion of the server's volatile system memory while the lower tier of storage is typically a slower and less expensive form of storage”).
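The tier dispatch mapped above for claim 48, in which a reference resolves either to directly accessible volatile memory or to a location reached by causing a service to access a lower tier, may be illustrated by the following sketch. The names resolve_and_access, local, cold, and service are illustrative inventions and appear in neither reference.

```python
def resolve_and_access(reference, local_memory, service):
    """Return the data for `reference`, choosing the access path by tier."""
    if reference in local_memory:    # first tier: directly accessible volatile memory
        return local_memory[reference]
    return service(reference)        # second tier: access caused through a service

local = {"page0": b"hot"}            # hypothetical directly accessible memory
cold = {"page9": b"cold"}            # hypothetical non-volatile lower tier
fetched = resolve_and_access("page9", local, cold.__getitem__)
```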
Regarding Claim 49, the rejection of Claim 48 is incorporated, and the combination of Allen and Bohrer further discloses: cause a different processor to access the data location if the data location is within different volatile memory directly accessible by the different processor (see [0007] and [0019] from Bohrer; “the second tier may represent a local disk, non-volatile networked storage, or a remote system memory … the server retrieves a subsequent fragment of the requested data object from a lower tier of storage such as a local disk, networked storage, or the system memory of another server” and “System memory 122 typically represents the server's dynamic random access memory (DRAM) or other volatile storage structure”. Note: it is understood that the system memory of another server is directly accessible by the processor of that other server, i.e., the claimed different processor).
Regarding Claim 50, the rejection of Claim 48 is incorporated, and the combination of Allen and Bohrer further discloses: wherein the non-volatile memory is remote with respect to the processor (see [0007] from Bohrer; “The first tier is typically the server's volatile system memory while the second tier may represent a local disk, non-volatile networked storage”. Non-volatile networked storage is remote from the server's processor, i.e., the claimed processor).
Regarding Claim 51, the rejection of Claim 48 is incorporated, and the combination of Allen and Bohrer further discloses: cause the service to access the data location if the data location is within different volatile memory directly accessible by the service (see [0007] and [0019] from Bohrer; “the second tier may represent a local disk, non-volatile networked storage, or a remote system memory … the server retrieves a subsequent fragment of the requested data object from a lower tier of storage such as a local disk, networked storage, or the system memory of another server” and “System memory 122 typically represents the server's dynamic random access memory (DRAM) or other volatile storage structure”).
Claims 3, 34, 46 are rejected under 35 U.S.C. 103 as being unpatentable over Allen (US 9875192 B1) in view of Bohrer et al. (US 20030061352 A1, hereafter Bohrer) and further in view of Radi et al. (US 20230251929 A1, hereafter Radi).
Regarding Claim 3, the rejection of Claim 1 is incorporated; the combination of Allen and Bohrer does not disclose: computing a third portion of the data if the third portion is not stored at the one or more data locations or the at least one PPU determines it would take too long to access the third portion in the plurality of data tiers.
However, Radi discloses: computing a third portion of the data if the third portion is not stored at the one or more data locations or the at least one processor determines it would take too long to access the third portion in the plurality of data storage components (see [0038]-[0039]; “lost or corrupted data blocks can be recovered for up to N lost or corrupted data blocks out of a total of M data blocks when using an EC algorithm … The number of M data blocks corresponds to the overhead in terms of processing and memory resources in calculating the parity blocks and recovering missing or corrupted data blocks”. In one reasonable embodiment, a third portion of the data is missing at the data locations, and the missing third portion is then computed or recovered).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the data access and retrieval operations of the combination of Allen and Bohrer by including the recovery of missing data blocks from Radi; thus the combination of Allen, Bohrer and Radi would disclose the missing limitation of the combination of Allen and Bohrer, since it would provide a mechanism to recover missing or corrupted data blocks (see [0038] from Radi).
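For illustration only (not from the record), the parity-based recovery Radi describes can be sketched with a minimal single-parity example, where one XOR parity block over M data blocks allows any single missing block to be recomputed from the survivors; the block contents and function names are hypothetical.

```python
# Minimal single-parity sketch of erasure-coded recovery (illustrative only):
# a parity block is the XOR of all data blocks, so any one lost block equals
# the XOR of the parity with the surviving blocks.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def make_parity(data_blocks):
    return xor_blocks(data_blocks)

def recover_missing(surviving_blocks, parity):
    # XOR of the parity with all surviving data blocks yields the lost block.
    return xor_blocks(surviving_blocks + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(data)
lost = data.pop(1)                      # simulate a missing/corrupted block
assert recover_missing(data, parity) == lost
```

Production erasure codes (e.g., Reed-Solomon) generalize this to N recoverable losses, but the recovery principle is the same.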
Regarding Claim 34, Claim 34 is a system claim that corresponds to method Claim 3 and is rejected for the same reasons set forth in the rejection of Claim 3 above.
Regarding Claim 46, Claim 46 is a system claim that corresponds to method Claim 3 and is rejected for the same reasons set forth in the rejection of Claim 3 above.
Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Allen (US 9875192 B1) in view of Bohrer et al. (US 20030061352 A1, hereafter Bohrer) and further in view of Brandyberry et al. (US 20090094445 A1, hereafter Brandyberry).
Regarding Claim 5, the rejection of Claim 1 is incorporated and further the combination of Allen and Bohrer discloses:
performing an application using first hardware comprising the at least one PPU (see lines 9-15 of col. 2 from Allen; “the virtual machine may be configured to launch an and run application that executes a plurality of threads in parallel in a highly parallel processor, such as a graphics processing unit”);
storing accessed data obtained based at least in part on the at least one data access in at least one memory associated with the at least one PPU (see lines 39-53 of col. 2 and lines 39-55 of col. 9 from Allen; “copy that block of data, starting at the location corresponding to the input file offset, into a read page allocated to the least recently used cache of the file buffer” and “The least recently used cache 316 for read pages may be one or more buffers in system memory”).
The combination of Allen and Bohrer does not disclose:
pausing performance of the application by the first hardware;
moving the application and the accessed data to different second hardware; and
resuming performance of the application on the different second hardware.
However, Brandyberry discloses:
performing an application using first hardware comprising the at least one processor (see [0022]; “server 104 supports a software partition having one or more applications running in the software partition. The term “running” refers to a processor actively executing the instructions of a process”);
storing accessed data obtained in at least one memory associated with the at least one processor (see [0037]-[0038]; “the application state and memory contents for an application”. Also see [0042]; “the range of discrete memory space addresses that identify a physical location in computer memory for storing data associated with an application. The data associated with the application includes, but is not limited to, the executable code for the application, any data in stack or heap memory, and any other data associated with the application”);
pausing performance of the application by the first hardware (see [0037] and [0046]; “A checkpoint operation …. when a software partition is migrated from one physical computing device to another physical computing device” and “in response to receiving a checkpoint signal by a plurality of threads associated with an application running in a software partition, the plurality of threads rendezvous to a point outside an application text associated with the application. The term rendezvous refers to the threads meeting at a common point. Rendezvousing the plurality of threads suspends execution of application text by the plurality of threads”);
moving the application and the accessed data to different second hardware (see [0037]; “A checkpoint operation is a data integrity operation in which the application state and memory contents for an application are written to stable storage at a particular time to provide a basis upon which to recreate the state of an application and/or processes running in a software partition, such as when a software partition is migrated from one physical computing device to another physical computing device”. Note: the application is running within the software partition, and thus the application is also moved/migrated to another physical computing device when “a software partition” associated with the application is migrated “to another physical computing device”); and
resuming performance of the application on the different second hardware (see [0037]- [0039]; “The processes running in the departure software partition are restored or restarted on the arrival software partition”. Also see [0054]; “restore or restart processes 320-322 on arrival server 304 in the same state that processes 320-322 were in on departure server 302 at the time checkpoint data 330 was last saved”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the operation of a virtual machine running an application from the combination of Allen and Bohrer by including the migration of an application and its associated state to another device for execution from Brandyberry; thus the combination of Allen, Bohrer and Brandyberry would disclose the missing limitations of the combination of Allen and Bohrer, since it would provide a mechanism to restore the state of an application in the event of a failure (see [0006]-[0007] from Brandyberry).
Regarding Claim 6, the rejection of Claim 1 is incorporated and further the combination of Allen and Bohrer discloses:
performing an application using first hardware comprising the at least one PPU (see lines 9-15 of col. 2 from Allen; “the virtual machine may be configured to launch an and run application that executes a plurality of threads in parallel in a highly parallel processor, such as a graphics processing unit”);
storing accessed data obtained based at least in part on the at least one data access in different second hardware (see lines 39-53 of col. 2 and lines 39-55 of col. 9 from Allen; “copy that block of data, starting at the location corresponding to the input file offset, into a read page allocated to the least recently used cache of the file buffer” and “The least recently used cache 316 for read pages may be one or more buffers in system memory”).
The combination of Allen and Bohrer does not disclose:
pausing performance of the application by the first hardware;
moving the application to different third hardware; and
resuming performance of the application on the different third hardware.
However, Brandyberry discloses:
performing an application using first hardware comprising the at least one processor (see [0022]; “server 104 supports a software partition having one or more applications running in the software partition. The term “running” refers to a processor actively executing the instructions of a process”);
storing accessed data obtained in different second hardware (see [0037]-[0038]; “the application state and memory contents for an application”. Also see [0042]; “the range of discrete memory space addresses that identify a physical location in computer memory for storing data associated with an application. The data associated with the application includes, but is not limited to, the executable code for the application, any data in stack or heap memory, and any other data associated with the application”);
pausing performance of the application by the first hardware (see [0037] and [0046]; “A checkpoint operation …. when a software partition is migrated from one physical computing device to another physical computing device” and “in response to receiving a checkpoint signal by a plurality of threads associated with an application running in a software partition, the plurality of threads rendezvous to a point outside an application text associated with the application. The term rendezvous refers to the threads meeting at a common point. Rendezvousing the plurality of threads suspends execution of application text by the plurality of threads”);
moving the application to different third hardware (see [0037]; “A checkpoint operation is a data integrity operation in which the application state and memory contents for an application are written to stable storage at a particular time to provide a basis upon which to recreate the state of an application and/or processes running in a software partition, such as when a software partition is migrated from one physical computing device to another physical computing device”. Note: the application is running within the software partition, and thus the application is also moved/migrated to another physical computing device when “a software partition” associated with the application is migrated “to another physical computing device”); and
resuming performance of the application on the different third hardware (see [0037]- [0039]; “The processes running in the departure software partition are restored or restarted on the arrival software partition”. Also see [0054]; “restore or restart processes 320-322 on arrival server 304 in the same state that processes 320-322 were in on departure server 302 at the time checkpoint data 330 was last saved”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the operation of a virtual machine running an application from the combination of Allen and Bohrer by including the migration of an application and its associated state to another device for execution from Brandyberry; thus the combination of Allen, Bohrer and Brandyberry would disclose the missing limitations of the combination of Allen and Bohrer, since it would provide a mechanism to restore the state of an application in the event of a failure (see [0006]-[0007] from Brandyberry).
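For illustration only (not from the record), the checkpoint-and-restart flow Brandyberry describes, pausing an application, writing its state to stable storage, and restoring it on the arrival hardware, can be sketched as follows; the state dictionary and file name are hypothetical assumptions.

```python
# Hedged sketch of a checkpoint/restart cycle (illustrative only): application
# state is serialized to stable storage at a checkpoint, then deserialized to
# resume the application in the same state on different hardware.
import os
import pickle
import tempfile

def checkpoint(state, path):
    with open(path, "wb") as f:
        pickle.dump(state, f)          # application state written to stable storage

def restore(path):
    with open(path, "rb") as f:
        return pickle.load(f)          # recreate the state on the arrival node

state = {"progress": 42, "partial_sums": [1, 2, 3]}   # "departure" hardware state
path = os.path.join(tempfile.mkdtemp(), "ckpt.bin")
checkpoint(state, path)                # pause point: state persisted
resumed = restore(path)                # "arrival" hardware resumes from here
assert resumed == state
```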
Claims 7-8, 15, 24-25, 37-38 are rejected under 35 U.S.C. 103 as being unpatentable over Allen (US 9875192 B1) in view of Bohrer et al. (US 20030061352 A1, hereafter Bohrer) and further in view of Katyal et al. (US 20220027180 A1, hereafter Katyal).
Regarding Claim 7, the rejection of Claim 1 is incorporated and further the combination of Allen and Bohrer discloses: performing, by the at least one PPU, at least one process in response to at least one call [to at least one function of an Application Programming Interface (“API”)] by an application, the at least one process performing the at least one data access in response to the at least one data access request (see lines 12-14 and 39-53 of col. 2 from Allen; “the virtual machine may be configured to launch an and run application that executes a plurality of threads in parallel in a highly parallel processor” and “receiving a read call from a thread of a block of threads to read a block of data … the thread may obtain the block of data directly from the cache. Otherwise … copy that block of data, starting at the location corresponding to the input file offset, into a read page allocated to the least recently used cache of the file buffer”).
The combination of Allen and Bohrer does not disclose:
at least one call to at least one function of an Application Programming Interface (“API”) by an application.
However, Katyal discloses: performing, by the at least one processor, at least one process in response to at least one call to at least one function of an Application Programming Interface (“API”) by an application, the at least one process performing the at least one data access in response to the at least one data access request (see [0016]; “VM1 118 may include application program interfaces (APIs) 126, including one or more APIs that operate with the application(s) 124 to issue API calls to request data for use by the application 124(s), to access data from storage, etc”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the data reading or access operations of the combination of Allen and Bohrer by including the calling of API functions to perform data access operations from Katyal; thus the combination of Allen, Bohrer and Katyal would disclose the missing limitation of the combination of Allen and Bohrer, since an API is a well-known and well-understood intermediary between at least two different software applications or systems for providing services.
Regarding Claim 8, the rejection of Claim 7 is incorporated and further the combination of Allen, Bohrer and Katyal discloses: wherein the at least one process is being performed by a first computing node and the at least one server interface is being performed by a different second computing node (see Fig. 1 and [0007] from Bohrer; “the server determines whether the first fragment of the requested data is present (and valid) in its file cache … the server retrieves a subsequent fragment of the requested data object from a lower tier of storage such as a local disk, networked storage, or the system memory of another server”. Note: at least the switch 110 of Fig. 1 can be considered the claimed server interface performed by a different second computing node).
Regarding Claim 15, the rejection of Claim 1 is incorporated and further the combination of Allen and Bohrer discloses: wherein the at least one data access request originates from an application executing within a trusted execution environment, and the method further comprises: [using an Application Programming Interface (“API”) to] extend the trusted execution environment to include at least a portion of one of the plurality of data tiers storing at least a portion of the data (see lines 12-14 and 39-53 of col. 2 from Allen and [0007] from Bohrer; “the virtual machine may be configured to launch an and run application that executes a plurality of threads in parallel in a highly parallel processor”, “receiving a read call from a thread of a block of threads to read a block of data … the thread may obtain the block of data directly from the cache. Otherwise … copy that block of data, starting at the location corresponding to the input file offset, into a read page allocated to the least recently used cache of the file buffer” and “a first portion or fragment of a requested data object in a first tier of storage while retaining subsequent portions of the data object in a second or lower tier of storage”. It is understood that a virtual machine may reasonably be considered a trusted execution environment. In addition, since the different portions from the different data tiers are retrieved in response to requests from the VM or trusted execution environment, the VM or trusted execution environment is extended to include the retrieved portions of data).
The combination of Allen and Bohrer does not disclose: using an Application Programming Interface (“API”) to extend the trusted execution environment to include at least a portion of one of the plurality of data tiers storing at least a portion of the data.
However, Katyal discloses: using an Application Programming Interface (“API”) to extend the trusted execution environment to include at least a portion of one of the plurality of data storages storing at least a portion of the data (see [0016]; “VM1 118 may include application program interfaces (APIs) 126, including one or more APIs that operate with the application(s) 124 to issue API calls to request data for use by the application 124(s), to access data from storage, etc”. Also see [0045]-[0046]; “the storage manager 140 or the CNS CSI driver 212 determines whether the requested data file is locally cached in the datastore 202 of the private cloud storage system” and “if the requested data file is determined to be absent from the cache (e.g., not cached previously) (“NO” at the block 306), then the storage manager 140 or the CNS CSI driver 212 passes the API call to the public cloud storage system 164 so that the public cloud storage system 164 can provide the requested data file to the VM/container”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the data reading or access operations of the combination of Allen and Bohrer by including the calling of API functions to perform data access operations from Katyal; thus the combination of Allen, Bohrer and Katyal would disclose the missing limitation of the combination of Allen and Bohrer, since an API is a well-known and well-understood intermediary between at least two different software applications or systems for providing services.
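For illustration only (not from the record), the Katyal-style flow, an API call that first checks a local cache and on a miss passes the request through to a backing store, can be sketched as follows; the names local_cache and backing_store are hypothetical stand-ins for the private-cloud datastore and the public cloud storage system.

```python
# Hedged sketch of cache-then-pass-through data access via an API call
# (illustrative only; names and data are hypothetical).

local_cache = {}                                  # stands in for the private datastore
backing_store = {"report.txt": b"remote bytes"}   # stands in for public cloud storage

def api_get(name):
    if name in local_cache:                       # cached previously -> serve locally
        return local_cache[name]
    data = backing_store[name]                    # miss -> pass the call to the backing store
    local_cache[name] = data                      # cache for subsequent requests
    return data

assert api_get("report.txt") == b"remote bytes"   # first call reaches the backing store
assert "report.txt" in local_cache                # now cached locally
```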
Regarding Claim 24, Claim 24 is a system claim that corresponds to method Claim 7 and is rejected for the same reasons set forth in the rejection of Claim 7 above.
Regarding Claim 25, the rejection of Claim 24 is incorporated and further the combination of Allen, Bohrer and Katyal discloses: wherein the at least one data access request originates from at least one of the application or the API (see lines 12-14 and 39-53 of col. 2 from Allen; “the virtual machine may be configured to launch an and run application that executes a plurality of threads in parallel in a highly parallel processor” and “receiving a read call from a thread of a block of threads to read a block of data”).
Regarding Claim 37, Claim 37 is a system claim that corresponds to method Claim 7 and is rejected for the same reasons set forth in the rejection of Claim 7 above.
Regarding Claim 38, the rejection of Claim 37 is incorporated and further the combination of Allen, Bohrer and Katyal discloses: wherein the at least one data access request originates from at least one of the application or the API (see lines 12-14 and 39-53 of col. 2 from Allen; “the virtual machine may be configured to launch an and run application that executes a plurality of threads in parallel in a highly parallel processor” and “receiving a read call from a thread of a block of threads to read a block of data”).
Claims 10-11, 26-27, 39-40 are rejected under 35 U.S.C. 103 as being unpatentable over Allen (US 9875192 B1) in view of Bohrer et al. (US 20030061352 A1, hereafter Bohrer) and further in view of Rangaswami et al. (US 20170091055 A1, hereafter Rangaswami).
Regarding Claim 10, the rejection of Claim 1 is incorporated and further the combination of Allen and Bohrer discloses: performing, by the at least one PPU, at least one process that performs the at least one data access in response to the at least one data access request (see lines 39-53 of col. 2 from Allen; “receiving a read call from a thread of a block of threads to read a block of data … the thread may obtain the block of data directly from the cache. Otherwise … copy that block of data, starting at the location corresponding to the input file offset, into a read page allocated to the least recently used cache of the file buffer”).
The combination of Allen and Bohrer does not disclose: the at least one data access comprising a synchronous data access that waits for the synchronous data access to complete before the at least one process continues processing.
However, Rangaswami discloses: the at least one data access comprising a synchronous data access that waits for the synchronous data access to complete before the at least one process continues processing (see [0032]-[0033] and [0036]; “when a request is issued according to a synchronous or asynchronous mode”, “In a synchronous operation, the instructions of the operation execute in a serial progression, where each instruction is completely performed prior to continuing to the next instruction or function. For example, when an instruction in function A calls a function B, function A waits for function B to complete the entirety of its instructions before function A continues with the instruction after the call to function B” and “a synchronous call to the LA-IFD 220 to store the data segment (202)”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the data access operations of the combination of Allen and Bohrer by including the synchronous or asynchronous data access operations from Rangaswami; thus the combination of Allen, Bohrer and Rangaswami would disclose the missing limitation of the combination of Allen and Bohrer, since a request can be classified as either synchronous mode or asynchronous mode (see [0032]-[0033] from Rangaswami).
Regarding Claim 11, the rejection of Claim 1 is incorporated and further the combination of Allen and Bohrer discloses: performing, by the at least one PPU, at least one process that performs the at least one data access in response to the at least one data access request (see lines 39-53 of col. 2 from Allen; “receiving a read call from a thread of a block of threads to read a block of data … the thread may obtain the block of data directly from the cache. Otherwise … copy that block of data, starting at the location corresponding to the input file offset, into a read page allocated to the least recently used cache of the file buffer”).
The combination of Allen and Bohrer does not disclose: the at least one data access comprising an asynchronous data access that provides at least one notification to at least one recipient indicating whether the asynchronous data access was completed successfully, the at least one process being allowed to continue processing before the asynchronous data access is complete.
However, Rangaswami discloses: the at least one data access comprising an asynchronous data access that provides at least one notification to at least one recipient indicating whether the asynchronous data access was completed successfully, the at least one process being allowed to continue processing before the asynchronous data access is complete (see [0032]-[0033] and [0037]; “when a request is issued according to a synchronous or asynchronous mode”, “an asynchronous operation is characterized by return of control to the caller before the full scope of the operation has been completed. For example, if function B is an asynchronous function, function B immediately returns control to function A, even though function B may merely initiate the process of performing its work. In many implementations, an asynchronous operation may be performed by initiating an additional “thread” of execution according to existing mechanisms provided by the operating system” and “An update of the data segment is initiated at the remote storage system 230 as an asynchronous operation (204)”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the data access operations of the combination of Allen and Bohrer by including the synchronous or asynchronous data access operations from Rangaswami; thus the combination of Allen, Bohrer and Rangaswami would disclose the missing limitation of the combination of Allen and Bohrer, since a request can be classified as either synchronous mode or asynchronous mode (see [0032]-[0033] from Rangaswami).
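For illustration only (not from the record), the distinction Rangaswami draws between the two access modes can be sketched as follows: a synchronous call blocks the caller until the access completes, while an asynchronous call returns control immediately and delivers a completion notification from an additional thread of execution. The storage dictionary, function names, and callback are hypothetical assumptions.

```python
# Hedged sketch of synchronous vs asynchronous data access (illustrative only).
import threading

storage = {"blk0": b"data"}
results = []

def read_sync(key):
    return storage[key]                     # caller waits; serial progression

def read_async(key, on_done):
    def worker():                           # additional thread of execution
        on_done(key, storage[key])          # completion notification to the caller
    t = threading.Thread(target=worker)
    t.start()
    return t                                # control returns before completion

data = read_sync("blk0")                    # control returns only when done
t = read_async("blk0", lambda k, v: results.append((k, v)))
t.join()                                    # wait here only so the demo can check
assert data == b"data" and results == [("blk0", b"data")]
```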
Regarding Claim 26, Claim 26 is a system claim that corresponds to method Claim 10 and is rejected for the same reasons set forth in the rejection of Claim 10 above.
Regarding Claim 27, Claim 27 is a system claim that corresponds to method Claim 11 and is rejected for the same reasons set forth in the rejection of Claim 11 above.
Regarding Claim 39, Claim 39 is a system claim that corresponds to method Claim 10 and is rejected for the same reasons set forth in the rejection of Claim 10 above.
Regarding Claim 40, Claim 40 is a system claim that corresponds to method Claim 11 and is rejected for the same reasons set forth in the rejection of Claim 11 above.
Claims 12-14, 28-29, 41-42 are rejected under 35 U.S.C. 103 as being unpatentable over Allen (US 9875192 B1) in view of Bohrer et al. (US 20030061352 A1, hereafter Bohrer) and Rangaswami et al. (US 20170091055 A1, hereafter Rangaswami) and further in view of Brewer (US 20190340022 A1).
Regarding Claim 12, the rejection of Claim 11 is incorporated and further the combination of Allen, Bohrer and Rangaswami discloses: comprises a different thread for each of a plurality of data elements (see lines 33-42 of col. 13 from Allen; “a call may be received by the system performing the process 600 for the thread of a block of threads to read from a file … each thread of the block of threads may individually issue such a read call”).
The combination of Allen, Bohrer and Rangaswami does not disclose: wherein the at least one notification comprises a different notification for each of a plurality of data elements.
However, Brewer discloses: wherein the at least one notification comprises a different notification for each of a plurality of threads (see [0148]; “The sent event can be a point-to-point message with a single destination thread, or a broadcast message sent to all threads within a group of processing resources belonging to the same process”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the data access operations issued by different threads from the combination of Allen, Bohrer and Rangaswami by including a point-to-point message with a single destination thread from Brewer; thus the combination of Allen, Bohrer, Rangaswami and Brewer would disclose the missing limitation of the combination of Allen, Bohrer and Rangaswami, since it would provide the flexibility of selecting either a one-to-one response mechanism to threads or a one-to-all response mechanism to threads (see [0148] from Brewer).
Regarding Claim 13, the rejection of Claim 11 is incorporated and further the combination of Allen, Bohrer and Rangaswami discloses: wherein the at least one data access request comprises a plurality of data access requests originating from a plurality of threads (see lines 39-41 of col. 2 and lines 40-42 of col. 13 from Allen; “a read call from a thread of a block of threads to read a block of data” and “the thread of the block of threads may be operating in parallel, each thread of the block of threads may individually issue such a read call”).
The combination of Allen, Bohrer and Rangaswami does not disclose: the at least one notification comprises a different notification for each of the plurality of threads.
However, Brewer discloses: the at least one notification comprises a different notification for each of the plurality of threads (see [0148]; “The sent event can be a point-to-point message with a single destination thread, or a broadcast message sent to all threads within a group of processing resources belonging to the same process”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the data access operations issued by different threads from the combination of Allen, Bohrer and Rangaswami by including a point-to-point message with a single destination thread from Brewer; thus the combination of Allen, Bohrer, Rangaswami and Brewer would disclose the missing limitation of the combination of Allen, Bohrer and Rangaswami, since it would provide the flexibility of selecting either a one-to-one response mechanism to threads or a one-to-all response mechanism to threads (see [0148] from Brewer).
Regarding Claim 14, the rejection of Claim 11 is incorporated and further the combination of Allen, Bohrer and Rangaswami discloses: wherein the at least one data access request comprises a plurality of data access requests originating from a plurality of threads (see lines 39-41 of col. 2 and lines 40-42 of col. 13 from Allen; “a read call from a thread of a block of threads to read a block of data” and “the thread of the block of threads may be operating in parallel, each thread of the block of threads may individually issue such a read call”).
The combination of Allen, Bohrer and Rangaswami does not disclose: the at least one notification comprises a single notification to be provided to the plurality of threads.
However, Brewer discloses: the at least one notification comprises a single notification to be provided to the plurality of threads (see [0148]; “The sent event can be a point-to-point message with a single destination thread, or a broadcast message sent to all threads within a group of processing resources belonging to the same process”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the data access operations issued by different threads from the combination of Allen, Bohrer and Rangaswami by including a broadcast message sent to all threads within the same process from Brewer; thus the combination of Allen, Bohrer, Rangaswami and Brewer would disclose the missing limitation of the combination of Allen, Bohrer and Rangaswami, since it would provide the flexibility of selecting either a one-to-one response mechanism to threads or a one-to-all response mechanism to threads (see [0148] from Brewer).
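For illustration only (not from the record), the two notification patterns Brewer describes, a point-to-point message delivered to a single destination thread versus a broadcast copied to every thread in a group, can be sketched with per-thread message queues; the thread identifiers and message contents are hypothetical assumptions.

```python
# Hedged sketch of point-to-point vs broadcast notifications (illustrative only):
# each thread has an inbox queue; point-to-point posts to one inbox, broadcast
# posts a copy to every inbox in the group.
import queue

inboxes = {tid: queue.Queue() for tid in ("t0", "t1", "t2")}

def send_point_to_point(dest, msg):
    inboxes[dest].put(msg)                 # single destination thread

def send_broadcast(msg):
    for box in inboxes.values():           # all threads in the group
        box.put(msg)

send_point_to_point("t1", "done:blk7")
send_broadcast("barrier-reached")
assert inboxes["t0"].get() == "barrier-reached"   # t0 saw only the broadcast
assert inboxes["t1"].get() == "done:blk7"         # t1 got its private message first
```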
Regarding Claim 28, Claim 28 is a system claim that corresponds to method Claim 12 and is rejected for the same reasons set forth in the rejection of Claim 12 above.
Regarding Claim 29, Claim 29 is a system claim that corresponds to method Claim 13 or 14 and is rejected for the same reasons set forth in the rejection of Claim 13 or 14 above.
Regarding Claim 41, Claim 41 is a system claim that corresponds to method Claim 12 and is rejected for the same reasons set forth in the rejection of Claim 12 above.
Regarding Claim 42, Claim 42 is a system claim that corresponds to method Claim 13 or 14 and is rejected for the same reasons set forth in the rejection of Claim 13 or 14 above.
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Allen (US 9875192 B1) in view of Bohrer et al. (US 20030061352 A1, hereafter Bohrer) and further in view of Antwerpen et al. (US 20180137294 A1, hereafter Antwerpen).
Regarding Claim 21, the rejection of Claim 1 is incorporated; the combination of Allen and Bohrer does not disclose: wherein the at least one PPU implements at least one node that implements a plurality of client interfaces that performs the at least one data access.
However, Antwerpen discloses: wherein the at least one processor implements at least one node that implements a plurality of client interfaces that performs the at least one data access (see [0125]; “Any data/command transfers through interfaces 1032 a and 1032 b to the XIP address space either access SRAM caches … If any of the interfaces 1032 a and 1032 b are configured with a SRAM cache, such cache may be used to cache read data”. Also see [0123]; “External memory controller block 1030 is a hardware block similar to external memory controller 130 in FIG. 1”. Such external memory controller block 1030 may reasonably be considered the claimed processor).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the memory cache access used to retrieve data in the combination of Allen and Bohrer by including at least two different interfaces configured with an SRAM cache to perform data access operations, as taught by Antwerpen, and thus the combination of Allen, Bohrer and Antwerpen would disclose the missing limitation of the combination of Allen and Bohrer, since it would provide different data operation techniques based on different needs (see [0032]-[0033] and [0125] of Antwerpen; “by using a strong (but relatively slow) encryption algorithm”, “by using a weak (but fast) encryption function” and “Fast XIP interface 1032 a and slow XIP interface 1032 b”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Thomas (US 20140136782 A1) discloses: execute an operating system configured to store objects at a local storage tier, a local network storage tier, or a remote network storage tier (see claim 1), and store a portion of a data object on at least two different storage tiers (see [0031]).
Green et al. (US 20110197039 A1) discloses: transferring the state of the virtual machine and resuming execution (see [0001]).
Mitra et al. (US 20210279087 A1) discloses: the Apps 230 use the API 220 to access data sources outside the containerized runtime 240 (see [0064]).
Dally (US 20210048992 A1) discloses: wherein the at least one PPU implements at least one node that implements a plurality of client interfaces (see Figs. 2, 3, [0033] and [0038]).
Moreton et al. (US 20170323475 A1) discloses: wherein the at least one PPU implements at least one node that implements a plurality of client interfaces (see Fig. 8 and [0094]; “The L2 cache 865 is connected to one or more memory interfaces 880. Memory interfaces 880 implement 16, 32, 64, 128-bit data buses, or the like, for high-speed data transfer”).
Jess (US 20100115198 A1) discloses: calculating the missing segment using parity and writing the recovered segment to the replacement drive (see [0009]).
Igashira et al. (US 20130262762 A1) discloses: the command issuance control unit 222 reproduces the missing data segment by calculating it from the other data segments and parity data that share the same stripe number (see [0108]).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHI CHEN whose telephone number is (571)272-0805. The examiner can normally be reached on M-F from 9:30AM to 5:30PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Y Blair can be reached on 571-270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR to authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/Zhi Chen/
Patent Examiner, AU2196
/APRIL Y BLAIR/Supervisory Patent Examiner, Art Unit 2196