DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite systems and methods for managing data and processing requests.
The limitations in Independent Claims 1 and 8 of managing data access, and in Independent Claim 15 of managing a processing request, as drafted, are processes that, under their broadest reasonable interpretation, cover steps that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components. That is, the limitations of “the entry identifying that a data is stored in a location, the location including one of the first memory or the second memory” in Claim 1; “identifying a data structure based on the data access request,” “identifying an entry in the data structure based on the data access request,” and “identifying a location storing the data based on the entry in the data structure” in Claim 8; and “performing an analysis of the processing request to determine a target to execute the processing request, the target including the first processor or a second processor associated with a device” in Claim 15, as drafted, recite the abstract idea of mental processes. These limitations encompass a human mind carrying out these functions through observation, evaluation, judgment and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite and fall within the “Mental Processes” grouping of abstract ideas.
This judicial exception is not integrated into a practical application. The claims recite the following additional elements: “receiving a data access request for a data from an application running on a processor” in Claim 8 and “receiving a processing request from an application running on a first processor, the processing request to be applied to a data” in Claim 15. These limitations do nothing more than add insignificant extra-solution activity to the judicial exception, such as data gathering and outputting the results of the abstract idea; see MPEP 2106.05(g).
Further, the “processor,” “first memory,” “second memory,” and “a data structure, the data structure including at least an entry” elements of Claims 1 and 8, as well as the “first processor” and “second processor” elements of Claim 15, are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component; see MPEP 2106.05(f). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements reciting “receiving a data access request for a data from an application running on a processor” and “accessing the data from the location, wherein the location includes a first memory or a second memory” in Claim 8, and “receiving a processing request from an application running on a first processor, the processing request to be applied to a data” and “dispatching the processing request to the target” in Claim 15, amount to no more than mere instructions to apply the exception using well-known, routine, and conventional generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Additionally, the “receiving a data access request” step in Claim 8, and the “receiving a processing request” and “dispatching the processing request” steps in Claim 15, constitute “receiving or transmitting data over a network,” which the courts have found to be a well-understood, routine, and conventional activity; see MPEP 2106.05(d)(II). Thus, Claims 1, 8 and 15 are not patent eligible under 35 U.S.C. 101.
With regard to the individual dependent claims:
Claim 2 recites, “further comprising a library to intercept a data access request from an application running on the processor to access the data.”
Claim 3 recites, “wherein: the data structure includes a scalable interval tree; and the entry includes a node in the scalable interval tree.”
Claim 4 recites, “wherein the entry includes a lock; and the lock is associated with a thread of an application running on the processor, the thread requesting access to the data.”
Claim 6 recites, “further comprising a library to intercept the processing request from the application running on the processor.”
Claim 9 recites, “wherein receiving the data access request for the data from the application running on the processor includes intercepting the data access request for the data from the application running on the processor.”
Claim 10 recites, “wherein intercepting the data access request for the data from the application running on the processor includes intercepting the data access request for the data from the application running on the processor by a library.”
Claim 14 recites, “receiving the data access request for the data from the application running on the processor includes receiving the data access request for the data from a thread of the application running on the processor; and accessing the data from the location includes applying a lock to the entry in the data structure for use by the thread.”
Claim 16 recites, “wherein receiving the processing request from the application running on the first processor includes intercepting the processing request for the data from the application running on the first processor.”
Claim 17 recites, “dispatching the processing request to the target includes dispatching the processing request to the first processor.”
Claim 19 recites, “wherein dispatching the processing request to the target includes dispatching the processing request to the first processor, the second processor, or to both the first processor and the second processor based at least in part on the first estimated time and the second estimated time.”
These limitations of Claims 2-4, 6, 9-10, 14, 16-17 and 19 recite further elements at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components; see MPEP 2106.05(f). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they cannot provide an inventive concept.
Claim 5 recites, “further comprising an analysis engine to calculate a first estimated time for a processing request, from an application running on the processor, on a target data on the processor and a second estimated time for the processing request, from the application running on the processor, on a second processor.”
Claim 7 recites, “...determine that a first part of the target data is stored in the first memory and a second part of the target data is stored in the second memory; calculate a first estimated time for the processor to execute the processing request from the application running on the processor; and calculate a second estimated time for the second processor to execute the processing request from the application running on the processor.”
Claim 11 recites, “identifying the data structure based on the data access request includes identifying the data structure based on the second data access request; identifying the entry in the data structure based on the data access request includes identifying a second entry in the data structure based on the second data access request; and identifying the location storing the data based on the entry in the data structure includes identifying the location storing the second data based on the second entry in the data structure.”
Claim 17 further recites, “determining that the data is stored in a memory associated with the first processor.”
Claim 18 recites, “determining that a first part of the data is stored in a first memory associated with the first processor and a second part of the data is stored in a second memory associated with the second processor; calculating a first estimated time for the first processor to execute the processing request; and calculating a second estimated time for the second processor to execute the processing request.”
Claim 20 recites, “calculating the first estimated time for the first processor to execute the processing request includes calculating a first transfer time to transfer the second part of the data to the first memory associated with the first processor; and calculating a second estimated time for the second processor to execute the processing request includes calculating a second transfer time to transfer the first part of the data to the second memory associated with the second processor.”
These limitations of Claims 5, 7, 11, 17-18 and 20, as drafted, are processes that, under their broadest reasonable interpretation, recite the abstract idea of a mental process. These limitations encompass a human mind carrying out these functions through observation, evaluation, judgment and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite and fall within the “Mental Processes” grouping of abstract ideas. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
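As an illustrative aside only (not part of the record), the estimation recited in Claims 18 and 20 reduces to arithmetic that could be carried out with pen and paper: each processor's estimated time is its compute time plus the time to transfer the part of the data it does not already hold. The sketch below uses invented figures and hypothetical names throughout:

```python
# Hypothetical sketch of the estimation recited in Claims 18 and 20: each
# processor's estimated time is its compute time plus the time to transfer
# the part of the data it does not already hold. All numbers are invented
# for illustration only.

def estimated_time(compute_time, remote_bytes, transfer_rate):
    # Transfer time for the data part residing in the other memory.
    transfer_time = remote_bytes / transfer_rate
    return compute_time + transfer_time

# First processor holds the first part; it must fetch the second part.
t1 = estimated_time(compute_time=2.0, remote_bytes=400.0, transfer_rate=100.0)
# Second processor holds the second part; it must fetch the first part.
t2 = estimated_time(compute_time=1.5, remote_bytes=600.0, transfer_rate=100.0)

target = "first processor" if t1 <= t2 else "second processor"
print(t1, t2, target)  # 6.0 7.5 first processor
```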
Claim 11 further recites, “receiving the data access request for the data from the application running on the processor includes receiving a second data access request for a second data from the application running on the processor.”
Claim 12 recites, “wherein accessing the data from the location includes accessing the data from the location by the processor.”
Claim 13 recites, “wherein accessing the data from the location includes issuing an input/output (I/O) request to a device, the device including the first memory.”
These limitations of Claims 11-13 do nothing more than add insignificant extra-solution activity to the judicial exception, such as data gathering and outputting the results of the abstract idea; see MPEP 2106.05(g). Additionally, the above limitations recite steps of “receiving or transmitting data over a network” and “storing and retrieving information in memory,” which the courts have found to be well-understood, routine, and conventional activities; see MPEP 2106.05(d)(II).
As such, for the reasons discussed above, dependent Claims 2-7, 9-14 and 16-20 are not patent eligible under 35 U.S.C. 101.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3, 8 and 11-13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by David et al. (US PGPUB 2021/0216569).
With regard to Claim 1, David teaches a system, comprising:
a processor ([0030] “the data storage system management software may execute on a processor of the data storage system 12.”);
a first memory connected to the processor (Fig. 1: Physical Storage Device 16a within Data Storage System 12.);
a second memory connected to the processor (Fig. 1: Physical Storage Device 16b within Data Storage System 12.); and
a data structure, the data structure including at least an entry, the entry identifying that a data is stored in a location, the location including one of the first memory or the second memory ([0043] “Referring to FIG. 2, shown is an example illustrating logical to physical mapping in a data storage system. The example 100 illustrates how the logical address space or range of a LUN 102 is mapped via mapping layer 104 to different slices, segments or more generally, portions of physical memory of non-volatile physical storage devices (110) providing back-end data storage, such as denoted by PDs 16a-n in FIG. 1... Element 102 may denote the LUN's logical address space, having a starting logical address, block or offset of 0, and an ending maximum logical address, MAX.” [0087] “In at least one embodiment, each destination interval represented by a node in the destination interval tree may include the information of each node as illustrated in the FIG. 4 as well as one or more additional fields... the destination node may include the source LUN identifier and the start LBA that may be used to uniquely identify the corresponding source node serving as the Xcopy source for the destination node,” wherein the “destination interval tree” is the “data structure” and further wherein the “start LBA” indicates a location which includes one of the “Physical Storage Devices” 16a or 16b, i.e. in an embodiment consisting of only two “Physical Storage Devices”.).
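For illustration only (not part of David's disclosure or the claimed invention), a mapping layer that resolves a LUN logical address to a slice of one of two physical storage devices, in the spirit of the FIG. 2 passage quoted above, can be sketched as follows; the slice size, device names, and mapping are hypothetical:

```python
# Illustrative sketch only: a mapping layer resolving a LUN logical block
# address to a slice of one of two physical storage devices, in the spirit
# of David's FIG. 2. All names and sizes are hypothetical.

SLICE_SIZE = 128  # logical blocks per slice (invented figure)

# Hypothetical mapping: slice index -> (physical device, offset on device)
mapping = {
    0: ("PD 16a", 0),
    1: ("PD 16b", 0),
    2: ("PD 16a", SLICE_SIZE),
}

def resolve(logical_block):
    """Map a LUN logical block address to (device, physical offset)."""
    slice_idx, within = divmod(logical_block, SLICE_SIZE)
    device, base = mapping[slice_idx]
    return device, base + within

print(resolve(130))  # logical block 130 falls in slice 1, on PD 16b
```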
With regard to Claim 3, David teaches the system according to claim 1, wherein:
the data structure includes a scalable interval tree; and the entry includes a node in the scalable interval tree ([0087] “In at least one embodiment, each destination interval represented by a node in the destination interval tree may include the information of each node as illustrated in the FIG. 4 as well as one or more additional fields... the destination node may include the source LUN identifier and the start LBA that may be used to uniquely identify the corresponding source node serving as the Xcopy source for the destination node.”).
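As an illustrative aside only (not David's implementation), an interval structure of the kind referenced above, where each node covers an LBA range and lookup finds the node containing a given LBA, can be sketched minimally. A production interval tree would be a balanced tree; a sorted list of non-overlapping intervals is enough to show the idea, and all names are hypothetical:

```python
import bisect

# Illustrative sketch only: a minimal interval structure in the spirit of
# the "destination interval tree" quoted above. Each node covers a
# [start, end) LBA range; lookup finds the node containing a given LBA.

class IntervalNode:
    def __init__(self, start, end, payload):
        self.start, self.end, self.payload = start, end, payload

class IntervalTree:
    def __init__(self):
        self.starts = []  # kept sorted by start LBA
        self.nodes = []

    def insert(self, start, end, payload):
        i = bisect.bisect_left(self.starts, start)
        self.starts.insert(i, start)
        self.nodes.insert(i, IntervalNode(start, end, payload))

    def find(self, lba):
        # Rightmost node whose start <= lba; then check containment.
        i = bisect.bisect_right(self.starts, lba) - 1
        if i >= 0 and self.nodes[i].start <= lba < self.nodes[i].end:
            return self.nodes[i]
        return None

tree = IntervalTree()
tree.insert(0, 100, "PD 16a")
tree.insert(100, 200, "PD 16b")
print(tree.find(150).payload)  # PD 16b
```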
With regard to Claim 8, David teaches a method, comprising:
receiving a data access request for a data from an application running on a processor ([0019] “The processors included in the host systems 14a-14n and data storage system 12 may be any one of a variety of proprietary or commercially available single or multi-processor system.” [0021] “any one of the host computers 14a-14n may issue a data request to the data storage system 12 to perform a data operation. For example, an application executing on one of the host computers 14a-14n may perform a read or write operation resulting in one or more data requests to the data storage system 12.” [0138] “At the step 1002, a read I/O operation is received from the host.”);
identifying a data structure based on the data access request; identifying an entry in the data structure based on the data access request; identifying a location storing the data based on the entry in the data structure ([0009] “processing may include: receiving, from a client, a read I/O operation that reads from a first target location; determining that the first target location overlaps with a second destination interval of a second destination node of the destination interval tree.” [0043] “Referring to FIG. 2, shown is an example illustrating logical to physical mapping in a data storage system. The example 100 illustrates how the logical address space or range of a LUN 102 is mapped via mapping layer 104 to different slices, segments or more generally, portions of physical memory of non-volatile physical storage devices (110) providing back-end data storage, such as denoted by PDs 16a-n in FIG. 1... Element 102 may denote the LUN's logical address space, having a starting logical address, block or offset of 0, and an ending maximum logical address, MAX.” [0044] “Consistent with discussion herein, the data storage system may receive a host I/O that reads or writes data to a target location expressed as a LUN and offset, logical address, track, etc. on the LUN. The target location is a logical LUN address that may map to a physical storage location where data stored at the logical LUN address is stored.”); and
accessing the data from the location, wherein the location includes a first memory or a second memory (Fig. 1: Physical Storage Device 16a/16b within Data Storage System 12. [0042] “the data storage system may include multiple SSD tiers of non-volatile storage where each of the SSD tiers has different characteristics that affect latency when accessing the physical storage media to read or write data.” See Figs. 9a-9b, for example, [0143] “At the step 1018, processing is performed to read data for the target location from a physical storage location based on the location MD for the target location. The data read from the physical storage location is returned as the content of the target location.”).
With regard to Claim 11, this claim depends from Claim 8 rejected above and incorporates its limitations, and as such Claim 11 is rejected under the same grounds and for the same reasons as discussed above with regard to Claim 8.
With further regard to Claim 11, the claim recites additional elements not specifically addressed in the rejection of Claim 8. The David reference also anticipates these additional elements of Claim 11, for example, David teaches:
receiving the data access request for the data from the application running on the processor includes receiving a second data access request for a second data from the application running on the processor ([0021] “For example, an application executing on one of the host computers 14a-14n may perform a read or write operation resulting in one or more data requests to the data storage system 12,” wherein the “second data access request” is a second of the “one or more data requests to the data storage system”.).
With regard to Claim 12, David teaches the method according to claim 8, wherein accessing the data from the location includes accessing the data from the location by the processor ([0019] “The processors included in the host systems 14a-14n and data storage system 12 may be any one of a variety of proprietary or commercially available single or multi-processor system, such as an Intel-based processor, or other type of commercially available processor able to support traffic in accordance with each particular embodiment and application.” [0024] “The data storage array may also include different types of adapters or directors, such as an HA 21 (host adapter), RA 40 (remote adapter), and/or device interface or controller 23. Each of the adapters may be implemented using hardware including a processor with a local memory with code stored thereon for execution in connection with performing different operations.”).
With regard to Claim 13, David teaches the method according to claim 8, wherein accessing the data from the location includes issuing an input/output (I/O) request to a device, the device including the first memory ([0018] “hosts 14a-14n may access the data storage system 12, for example, in performing input/output (I/O) operations or data requests.” [0021] “In the embodiment of the FIG. 1, any one of the host computers 14a-14n may issue a data request to the data storage system 12 to perform a data operation. For example, an application executing on one of the host computers 14a-14n may perform a read or write operation resulting in one or more data requests to the data storage system 12.”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over David as applied to Claims 1 and 8 above, and further in view of Wang et al. (US PGPUB 2024/0045804).
With regard to claim 2, David teaches all the limitations of claim 1 as described above. David does not teach the access request intercepting as described in claim 2. Wang teaches
further comprising a library to intercept a data access request from an application running on the processor to access the data ([0074] “the SM libraries 521-523 may intercept requests by the host devices 501-503, respectively, to access pages of the shared memory 530 and communicate with the SM manager 510 to determine whether to grant the access request. More specifically, an SM library may determine whether to grant access to a particular page of the shared memory 530 based on the owner state of the requested page, the host state of the requestor, and the requested access type (such as read or write access).” [0089] “when an application executing on a particular host device requires access to the shared memory 530, the SM library residing on that host device may negotiate with the SM manager 510 to acquire the necessary lock associated with the requested access type (such as read or write access) and map the shared memory 530 with the granted access type.”).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the system as disclosed by David with the access request intercepting as taught by Wang in order “to maintain cache coherency and access synchronization” (Wang [0089]).
With regard to Claims 9-10, these claims are equivalent in scope to Claim 2 rejected above, merely depending from an independent claim of a different statutory class, and as such Claims 9-10 are respectively rejected under the same grounds and for the same reasons as discussed above with regard to Claim 2.
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over David as applied to Claims 1 and 8 above, and further in view of Raju et al. (US PGPUB 2020/0192864).
With regard to claim 4, David teaches all the limitations of claim 1 as described above. David does not teach the lock management functionality as described in claim 4. Raju teaches wherein
the entry includes a lock ([0022] “In an aspect, when a lock request for a given resource and range is submitted to a node of the platform, a lock manager associated with that node can determine whether the lock request can be satisfied. For instance, the lock manager can consult an interval tree or other suitable data structure that tracks ranges within the resource in order to determine any existing lock owners with ranges that intersect the requested range, any lock waiters with ranges that intersect the requested range, or the like.”); and
the lock is associated with a thread of an application running on the processor, the thread requesting access to the data ([0034] “the lock initiator component 210 can manage locks for multiple resources, as well as multiple threads of execution that can have locks on different resources,” wherein the “thread of execution” is necessarily associated with “an application running on the processor”.).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the system as disclosed by David with the lock management functionality as taught by Raju so that “Power consumption, processing cycles, and/or other computing resources associated with traversing a data structure, such as a data structure for lock management, can be reduced” (Raju [0024]).
With regard to claim 14, David teaches all the limitations of claim 8 as described above. David does not teach the lock management functionality as described in claim 14. Raju teaches wherein:
receiving the data access request for the data from the application running on the processor includes receiving the data access request for the data from a thread of the application running on the processor ([0034] “the lock initiator component 210 can manage locks for multiple resources, as well as multiple threads of execution that can have locks on different resources,” wherein the “thread of execution” is necessarily associated with “the application running on the processor”.); and
accessing the data from the location includes applying a lock to the entry in the data structure for use by the thread ([0022] “In an aspect, when a lock request for a given resource and range is submitted to a node of the platform, a lock manager associated with that node can determine whether the lock request can be satisfied. For instance, the lock manager can consult an interval tree or other suitable data structure that tracks ranges within the resource in order to determine any existing lock owners with ranges that intersect the requested range, any lock waiters with ranges that intersect the requested range, or the like.” [0033] “If the satisfiability component 220 determines that the requested lock can be granted, e.g., by way of absence of contending locks or lock requests, the satisfiability component 220 can grant the requested lock and add the requester as a lock owner for the requested resource.”).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the method as disclosed by David with the lock management functionality as taught by Raju so that “Power consumption, processing cycles, and/or other computing resources associated with traversing a data structure, such as a data structure for lock management, can be reduced” (Raju [0024]).
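For illustration only (not Raju's implementation), a range-lock manager that, like the passage quoted above, checks a requested range against existing lock owners' ranges before granting can be sketched as follows; all names are hypothetical, and a list stands in for Raju's interval tree:

```python
import threading

# Illustrative sketch only: a range-lock manager that checks a requested
# [start, end) range against existing owners' ranges before granting, in
# the spirit of the passage quoted from Raju. All names are hypothetical;
# a plain list stands in for the interval tree.

class RangeLockManager:
    def __init__(self):
        self._mutex = threading.Lock()
        self._owners = []  # list of (start, end, thread_name)

    @staticmethod
    def _overlaps(a_start, a_end, b_start, b_end):
        return a_start < b_end and b_start < a_end

    def try_lock(self, start, end, thread_name):
        with self._mutex:
            for s, e, _ in self._owners:
                if self._overlaps(start, end, s, e):
                    return False  # a contending lock owner exists
            self._owners.append((start, end, thread_name))
            return True

    def unlock(self, start, end, thread_name):
        with self._mutex:
            self._owners.remove((start, end, thread_name))

mgr = RangeLockManager()
print(mgr.try_lock(0, 100, "t1"))    # True
print(mgr.try_lock(50, 150, "t2"))   # False: requested range intersects t1's
print(mgr.try_lock(100, 200, "t2"))  # True: [100, 200) does not overlap [0, 100)
```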
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over David as applied to Claim 1 above, and further in view of Lo et al. (US PGPUB 2018/0321980).
With regard to claim 5, David teaches all the limitations of claim 1 as described above. David does not teach the estimating of processing times as described in claim 5. Lo teaches further comprising
an analysis engine to calculate a first estimated time for a processing request, from an application running on the processor, on a target data on the processor and a second estimated time for the processing request, from the application running on the processor, on a second processor ([0048] “the applicability of the disclosed technology and the term ‘computing core’ encompasses, and is not limited to, ... a central processor unit.” [0161] “At 104, execution time of the plurality of program tasks on one or more computing cores is estimated. Each program feature is mapped to an execution time estimate on a selected computing core.”).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the system as disclosed by David with the estimating of processing times as taught by Lo in order to “intelligently select a computing core in a heterogeneous system to optimize task execution” (Lo [0048]).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over David in view of Lo as applied to Claim 5 above, and further in view of Wang.
With regard to claim 6, David in view of Lo teaches all the limitations of claim 5 as described above. David in view of Lo does not teach the processing request intercepting as described in claim 6. Wang teaches
further comprising a library to intercept the processing request from the application running on the processor ([0074] “the SM libraries 521-523 may intercept requests by the host devices 501-503, respectively, to access pages of the shared memory 530 and communicate with the SM manager 510 to determine whether to grant the access request. More specifically, an SM library may determine whether to grant access to a particular page of the shared memory 530 based on the owner state of the requested page, the host state of the requestor, and the requested access type (such as read or write access),” wherein the “processing request” is a “data access request”. [0089] “when an application executing on a particular host device requires access to the shared memory 530, the SM library residing on that host device may negotiate with the SM manager 510 to acquire the necessary lock associated with the requested access type (such as read or write access) and map the shared memory 530 with the granted access type.”).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the system as disclosed by David in view of Lo with the processing request intercepting as taught by Wang in order “to maintain cache coherency and access synchronization” (Wang [0089]).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over David in view of Lo as applied to Claim 5 above, and further in view of Suarez Garcia et al. (US PGPUB 2017/0060633; hereinafter “Suarez”).
With regard to claim 7, David in view of Lo teaches all the limitations of claim 5 as described above. David in view of Lo does not teach the data location determining as described in claim 7. Suarez teaches wherein the analysis engine is configured to:
determine that a first part of the target data is stored in the first memory and a second part of the target data is stored in the second memory ([0048] “In order to efficiently estimate data transfer costs for transferring data needed for performing the tasks, some embodiment techniques may identify data dependencies of tasks with regard to processing units and/or the locations of data. For example, a scheduler or runtime functionality may identify that a first and second task may both require data of a first buffer stored within a first data storage unit.” [0051] “FIG. 2B is a component diagram 250 illustrating exemplary accesses of buffer data (i.e., buffers 254a, 254b, 254c-254n) by a plurality of processing units 252a-252n in order to execute the plurality of exemplary tasks 201-206 as described with reference to FIG. 2A. In particular, FIG. 2B illustrates that a first task 201, fourth task 204, and sixth task 206 may be assigned to execute on a CPU 252a, requiring data of a first buffer 254a and a fourth buffer 254n...,” wherein Fig. 2B shows the type of information resulting from the data location determining process described in Suarez [0048].).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the system as disclosed by David in view of Lo with the data location determining as taught by Suarez in order “to improve efficiency of task executions with regard to data transfers and thus reduce unnecessary flushing and other drawbacks that may be present in systems lacking coherency” (Suarez [0030]).
With further regard to Claim 7, Lo further teaches wherein the analysis engine is configured to:
calculate a first estimated time for the processor to execute the processing request from the application running on the processor; and calculate a second estimated time for the second processor to execute the processing request from the application running on the processor ([0048] “the applicability of the disclosed technology and the term ‘computing core’ encompasses, and is not limited to, ... a central processor unit.” [0161] “At 104, execution time of the plurality of program tasks on one or more computing cores is estimated. Each program feature is mapped to an execution time estimate on a selected computing core.”).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over David in view of Lo.
With regard to Claim 15, David teaches a method, comprising:
receiving a processing request from an application running on a first processor, the processing request to be applied to a data ([0019] “The processors included in the host systems 14a-14n and data storage system 12 may be any one of a variety of proprietary or commercially available single or multi-processor system.” [0021] “any one of the host computers 14a-14n may issue a data request to the data storage system 12 to perform a data operation. For example, an application executing on one of the host computers 14a-14n may perform a read or write operation resulting in one or more data requests to the data storage system 12,” wherein the “data request... to perform a data operation” is the “processing request”. [0138] “At the step 1002, a read I/O operation is received from the host.”).
With further regard to claim 15, David does not teach the analyzing of a processing request as described in claim 15. Lo teaches further comprising
performing an analysis of the processing request to determine a target to execute the processing request, the target including the first processor or a second processor associated with a device ([0048] “the applicability of the disclosed technology and the term ‘computing core’ encompasses, and is not limited to, ... a central processor unit.” [0161] “At 104, execution time of the plurality of program tasks on one or more computing cores is estimated. Each program feature is mapped to an execution time estimate on a selected computing core.”); and
dispatching the processing request to the target ([0164] “The controller migrates a job that cannot meet its deadline on the little core to a big core based on the predicted execution time.”).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the system as disclosed by David with the analyzing of a processing request as taught by Lo in order to “intelligently select a computing core in a heterogeneous system to optimize task execution” (Lo [0048]).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over David in view of Lo as applied to Claim 15 above, and further in view of Wang.
With regard to claim 16, David in view of Lo teaches all the limitations of claim 15 as described above. David in view of Lo does not teach the processing request intercepting as described in claim 16. Wang teaches
wherein receiving the processing request from the application running on the first processor includes intercepting the processing request for the data from the application running on the first processor ([0074] “the SM libraries 521-523 may intercept requests by the host devices 501-503, respectively, to access pages of the shared memory 530 and communicate with the SM manager 510 to determine whether to grant the access request. More specifically, an SM library may determine whether to grant access to a particular page of the shared memory 530 based on the owner state of the requested page, the host state of the requestor, and the requested access type (such as read or write access),” wherein the “processing request” is a “data access request”. [0089] “when an application executing on a particular host device requires access to the shared memory 530, the SM library residing on that host device may negotiate with the SM manager 510 to acquire the necessary lock associated with the requested access type (such as read or write access) and map the shared memory 530 with the granted access type.”).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the method as disclosed by David in view of Lo with the processing request intercepting as taught by Wang in order “to maintain cache coherency and access synchronization” (Wang [0089]).
Claims 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over David in view of Lo as applied to Claim 15 above, and further in view of Suarez.
With regard to claim 17, David in view of Lo teaches all the limitations of claim 15 as described above. David in view of Lo does not teach the data location determining as described in claim 17. Suarez teaches wherein:
performing the analysis of the processing request to determine the target to execute the processing request includes determining that the data is stored in a memory associated with the first processor; and dispatching the processing request to the target includes dispatching the processing request to the first processor ([0048] “In order to efficiently estimate data transfer costs for transferring data needed for performing the tasks, some embodiment techniques may identify data dependencies of tasks with regard to processing units and/or the locations of data. For example, a scheduler or runtime functionality may identify that a first and second task may both require data of a first buffer stored within a first data storage unit.” [0051] “FIG. 2B is a component diagram 250 illustrating exemplary accesses of buffer data (i.e., buffers 254a, 254b, 254c-254n) by a plurality of processing units 252a-252n in order to execute the plurality of exemplary tasks 201-206 as described with reference to FIG. 2A. In particular, FIG. 2B illustrates that a first task 201, fourth task 204, and sixth task 206 may be assigned to execute on a CPU 252a, requiring data of a first buffer 254a and a fourth buffer 254n...,” wherein Fig. 2B shows the type of information resulting from the data location determining process described in Suarez [0048].).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the method as disclosed by David in view of Lo with the data location determining as taught by Suarez in order “to improve efficiency of task executions with regard to data transfers and thus reduce unnecessary flushing and other drawbacks that may be present in systems lacking coherency” (Suarez [0030]).
With regard to claim 18, David in view of Lo teaches all the limitations of claim 15 as described above. David in view of Lo does not teach the data location determining as described in claim 18. Suarez teaches wherein performing the analysis of the processing request to determine the target to execute the processing request includes:
determining that a first part of the data is stored in a first memory associated with the first processor and a second part of the data is stored in a second memory associated with the second processor ([0048] “In order to efficiently estimate data transfer costs for transferring data needed for performing the tasks, some embodiment techniques may identify data dependencies of tasks with regard to processing units and/or the locations of data. For example, a scheduler or runtime functionality may identify that a first and second task may both require data of a first buffer stored within a first data storage unit.” [0051] “FIG. 2B is a component diagram 250 illustrating exemplary accesses of buffer data (i.e., buffers 254a, 254b, 254c-254n) by a plurality of processing units 252a-252n in order to execute the plurality of exemplary tasks 201-206 as described with reference to FIG. 2A. In particular, FIG. 2B illustrates that a first task 201, fourth task 204, and sixth task 206 may be assigned to execute on a CPU 252a, requiring data of a first buffer 254a and a fourth buffer 254n...,” wherein Fig. 2B shows the type of information resulting from the data location determining process described in Suarez [0048].).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the method as disclosed by David in view of Lo with the data location determining as taught by Suarez in order “to improve efficiency of task executions with regard to data transfers and thus reduce unnecessary flushing and other drawbacks that may be present in systems lacking coherency” (Suarez [0030]).
With further regard to Claim 18, Lo further teaches wherein performing the analysis of the processing request to determine the target to execute the processing request includes:
calculating a first estimated time for the first processor to execute the processing request; and calculating a second estimated time for the second processor to execute the processing request ([0048] “the applicability of the disclosed technology and the term ‘computing core’ encompasses, and is not limited to, ... a central processor unit.” [0161] “At 104, execution time of the plurality of program tasks on one or more computing cores is estimated. Each program feature is mapped to an execution time estimate on a selected computing core.”).
With regard to claim 19, David in view of Lo and Suarez teaches all the limitations of claim 18 as described above. Lo further teaches
wherein dispatching the processing request to the target includes dispatching the processing request to the first processor, the second processor, or to both the first processor and the second processor based at least in part on the first estimated time and the second estimated time ([0052] “A job is defined as a dynamic instance of a task. As an illustrative example, FIG. 2 shows the concept of tasks, jobs, and deadlines in the software domain, wherein each job has a deadline, or a time budget, which is the time by which it must finish execution.” [0075] “This model is used at the beginning of each job's time budget to estimate the DVFS levels and core types that can meet the deadline for the job. Then, the controller migrate a job and/or adjust the DVFS level in order to meet the deadline with minimal energy consumption. In this embodiment, the controller adjusts a DVFS level on the little core before running Job 1 and Job 2 based on the execution time prediction. For Job 3, the predictor determines that the deadline cannot be met on the little core and migrates the job to the big core.” [0164] “The predicted execution times can also be used in a heterogeneous system to enable task migration from one computing core to another. The controller migrates a job that cannot meet its deadline on the little core to a big core based on the predicted execution time.”).
With regard to claim 20, David in view of Lo and Suarez teaches all the limitations of claim 18 as described above. Suarez further teaches wherein:
calculating the first estimated time for the first processor to execute the processing request includes calculating a first transfer time to transfer the second part of the data to the first memory associated with the first processor; and calculating a second estimated time for the second processor to execute the processing request includes calculating a second transfer time to transfer the first part of the data to the second memory associated with the second processor ([0042] “the time required for moving data needed by a task may be determined with an API query that includes only a size of data to transfer, a source identity of a source data storage unit, and a destination identity of a destination data storage unit.” [0048] “data transfer costs (e.g., times, etc.) may indicate costs associated with maintaining cache coherency when multiple tasks are performed in sequence. For example, the cost to transfer data for use by a second task may include not only a data transfer time, but also a time estimate based on the time to complete the use of the data by a first task.” [0064] “the multi-processor computing device may estimate or otherwise calculate transfer times and power (energy) consumption for each of the tasks.”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure as follows:
Ohta et al. ("Optimization techniques at the I/O forwarding layer," 2010) discusses two optimization techniques at the I/O forwarding layer to further reduce I/O bottlenecks on leadership-class computing systems, including discussion regarding the use of interval trees and collaborative caching.
Ishizaki (US PGPUB 2018/0203785) discloses a method for improving performance of a system including a first processor and a second processor, wherein the method includes calculating a difference in execution time between executing instructions on the first processor and executing the instructions on the second processor.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS J SIMONETTI whose telephone number is (571)270-7702. The examiner can normally be reached Monday-Thursday 10AM-6PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arpan Savla, can be reached at (571) 272-1077. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICHOLAS J SIMONETTI/Primary Examiner, Art Unit 2137 January 7, 2026