Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to the claims filed 12/27/2022. Claims 1-20 are pending.
Priority
Applicant’s claim for priority from foreign application No. IN202241064894, filed 11/12/2022, is acknowledged.
Claim Objections
Claims 4-10 and 14-18 are objected to because of the following informalities:
Regarding Claim 4, Line 2 states “number of tiles per MPT”. This should state “number of tiles per MTP” instead, as MTP is used as an acronym for Multi-Tile Processor throughout the claims, and is being interpreted as such for examination.
Regarding Claim 14, Line 1 states “The computer-readable storage medium of Claim 13.” This should state “The non-transitory computer-readable storage medium of Claim 13” instead, consistent with the language of Claim 13, and is being interpreted as such for examination.
Regarding Claim 15, Line 1 states “The computer-readable storage medium of Claim 13.” This should state “The non-transitory computer-readable storage medium of Claim 13” instead, consistent with the language of Claim 13, and is being interpreted as such for examination.
Regarding Claim 16, Line 1 states “The computer-readable storage medium of Claim 13.” This should state “The non-transitory computer-readable storage medium of Claim 13” instead, consistent with the language of Claim 13, and is being interpreted as such for examination.
Regarding Claim 17, Line 1 states “The computer-readable storage medium claim 16.” This should state “The non-transitory computer-readable storage medium of Claim 16” instead, supplying the omitted “of” and remaining consistent with the language of Claim 16, and is being interpreted as such for examination.
Regarding Claim 18, Line 1 states “The computer-readable storage medium of Claim 17.” This should state “The non-transitory computer-readable storage medium of Claim 17” instead, consistent with the language of Claim 17, and is being interpreted as such for examination.
Any claim not specifically mentioned above is objected to by virtue of its dependency on an objected-to claim.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6 and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding Claim 6, it recites the limitation “The apparatus of claim 5, wherein the wear-and-tear includes” on Line 1. This limitation is unclear because Claim 5, from which it depends, recites “wear-and-tear per MTP, wear-and-tear per tile, wear-and-tear per core”, whereas Claim 6 merely states “wear-and-tear”; therefore, it is unclear which is being referenced in this Claim. For examination, the wear-and-tear is being interpreted as including the information based on the same criteria across the MTP, tile, or core.
Regarding Claim 17, it recites the limitation “The computer-readable storage medium claim 16, wherein the wear-and-tear includes” on Line 1. This limitation is unclear because Claim 16, from which it depends, recites “wear-and-tear per processor, wear-and-tear per core”, whereas Claim 17 merely states “wear-and-tear”; therefore, it is unclear which is being referenced in this Claim. For examination, the wear-and-tear is being interpreted as including the information based on the same criteria across the processor or core.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pace et al. (US 20030051066 A1) in view of Wilkinson et al. (US 20200210365 A1), hereinafter referred to as Pace and Wilkinson, respectively.
Regarding Claim 1, Pace discloses An apparatus of a computing node of a computing network, the apparatus including: an input and an output; and a processing circuitry coupled to the input and to the output, the processing circuitry to ( [0074] An exemplary method and/or exemplary embodiment of the present invention distributes an asset to a multi-tiered network node. An asset may represent network and/or application components (e.g., data, objects, applications, program modules, etc.) that may be distributed among the various resources of the network. In an embodiment, a pending notice is received from a distribution server. Please note that an embodiment in which an asset is distributed to a network node, where the asset may be distributed among the various resources of the network, corresponds to Applicant’s apparatus of a computing node of a computing network, including an input (to receive the asset) and an output (to distribute the asset to the resources), and processing circuitry coupled to the input and to the output, because, as it is a computing system, it necessarily requires processing circuitry to carry out these operations.):
receive, at the input, a first workload (WL) package including a WL ([0069] an arrangement configured to receive at least one package from at least one enterprise information system (EIS), the packages being subparts of at least one application program, the packages having at least one asset. Please note that an arrangement receiving a package having an asset corresponds to Applicant’s receiving a WL package including a WL at the input.);
determine a first computing resource (CR) metadata corresponding to the WL ([0074] If the notice indicates that at least one asset is pending (i.e., awaiting deployment), an asset descriptor manifest is received from the distribution server. Please note that receiving an asset descriptor manifest for the asset corresponds to Applicant’s determining a first CR metadata corresponding to the WL.);
recompose the first WL package into a second WL package, the second WL package including the WL and second CR metadata different from the first CR metadata, the second CR metadata being based at least in part on CR information regarding a server architecture onto which the WL is to be deployed ([0065] recomposing these application programs so that they can be executed on any given platform.; [0073] In an embodiment, a mapping layer can be generated for assets that have run-time dependencies; the mapping layer uses a distribution system to bridge the execution context of a first environment with that of a second environment. The asset executing in the first environment is able to access another resource located in the second environment, even though the asset does not have local access to the resource in the second environment.; [0181] Another exemplary embodiment and/or exemplary method of the present invention is directed to the extended environment data structure, in which the metadata descriptors provide information to describe any or more of: repository object definitions, scope object definitions, module object definitions, operation object definitions, exception object definitions, constant object definitions, properties object definitions, attribute object definitions, relationship object definitions, type object definitions, and other well known metadata object definitions. 
Please note that recomposing application programs so they can be executed on any given platform by generating a mapping layer for assets that have run-time dependencies to bridge the execution context of a first environment with that of a second corresponds to Applicant’s recomposing the first WL package into a second WL package including the WL and second CR metadata different from the first CR metadata, the second CR metadata being based on CR information regarding a server architecture onto which the WL is to be deployed. Because there is an extended environment data structure including metadata descriptors, this corresponds to the metadata for each particular environment, such as the environment onto which the workload is recomposed to operate, and would include CR information regarding the server architecture onto which it is to be deployed, such as within the operation object definitions.);
and send, from the output, the second WL package to one or more processors of the server architecture to cause deployment of the WL thereon ([0074] The asset descriptor manifest identifies at least one asset to be deployed to the node, and includes an offset associated with the asset identifier […] the entire asset is deployed to the node. Please note that the asset being deployed to the node corresponds to Applicant’s sending the second WL package to one or more processors of the server architecture to cause deployment of the WL thereon from the output, since it is now recomposed and able to be deployed.).
Pace does not explicitly disclose the second CR metadata further to indicate one or more processors of the server architecture onto which the WL is to be deployed.
However, Wilkinson discloses the second CR metadata further to indicate one or more processors of the server architecture onto which the WL is to be deployed ([0071] some or all of the processor chips 2 may comprise a respective array of tiles 4; [0087] the tile 4 in question begins transmitting data packets over the external interconnect 72 each indicating a destination tile 4 in a header of the packet. Please note that the data packet indicating a destination tile 4 for processing, where a tile is a component of a processor chip, corresponds to Applicant’s second CR metadata further indicating processors of the server architecture onto which the WL is to be deployed).
Pace and Wilkinson are both considered analogous art to the claimed invention because they are in the same field of computer data exchange between different systems for completing a process. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the WL package recomposition system of Pace, which already provides differing first and second CR metadata and deployment of the second WL package, to incorporate the teachings of Wilkinson such that the second CR metadata indicates the one or more processors of the server architecture onto which the WL is to be deployed, allowing for improved dispatching of processing and improved system performance through concurrency/parallelism, as described in Wilkinson.
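For illustration only, the flow mapped above for Claim 1 (receive a first WL package, determine its CR metadata, recompose it into a second WL package whose CR metadata identifies target processors, and send it for deployment) can be sketched as follows. All identifiers here (`WLPackage`, `recompose`, `target_processors`) are hypothetical and appear in neither the claims nor the cited references.

```python
from dataclasses import dataclass

# Hypothetical sketch of the claimed workflow: a first WL package carrying
# generic CR metadata is recomposed into a second package whose CR metadata
# identifies specific processors of the target server architecture.
# Names are illustrative only, not drawn from Pace or Wilkinson.

@dataclass
class WLPackage:
    workload: str      # the WL itself
    cr_metadata: dict  # computing-resource metadata accompanying the WL

def recompose(first: WLPackage, cr_info: dict) -> WLPackage:
    """Build a second WL package whose metadata reflects the target server
    architecture, e.g. which processors/tiles the WL is to be deployed onto."""
    second_metadata = {
        "target_processors": cr_info.get("processors", []),
        "derived_from": first.cr_metadata.get("origin", "unknown"),
    }
    # Same WL, different (second) CR metadata, per the claim language.
    return WLPackage(workload=first.workload, cr_metadata=second_metadata)

first_pkg = WLPackage("matrix-multiply", {"origin": "EIS"})
cr_info = {"processors": ["MTP0.tile3", "MTP1.tile0"]}
second_pkg = recompose(first_pkg, cr_info)
print(second_pkg.cr_metadata["target_processors"])  # deployment targets
```

The sketch only models the data handoff; any real implementation of such a system would also consult architecture-specific CR information when rewriting the metadata.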
Regarding Claim 2, Pace-Wilkinson as described in Claim 1, Wilkinson further discloses wherein the CR information includes information on individual ones of the one or more processors, and on individual ones of interconnects between the one or more processors ([0040] a processing system comprising an arrangement of multiple processor modules and at least a first interconnect for exchanging data between different sets of the processor modules, each processor module comprising a respective execution unit for executing a program and respective memory for storing the program and data operated on by the program. Please note that the processing system having an arrangement of multiple processor modules and at least a first interconnect for exchanging data between different sets of the processor modules corresponds to Applicant’s CR information including information on individual ones of the processors and interconnects between them, as the system must necessarily contain the information for the processors and their respective interconnects as part of its operation.).
Regarding Claim 3, Pace-Wilkinson as described in Claim 1, Wilkinson further discloses the one or more processors include a plurality of multi-tile processors (MTPs), individual ones of the MTPs including a plurality of tiles, individual ones of the tiles including one or more cores and one or more memory circuitries coupled to the one or more cores ([0007] a processor comprising an arrangement of multiple tiles on the same chip (i.e. same die), each tile comprising its own separate respective processing unit and memory (including program memory and data memory). Please note that the processor comprising an arrangement of multiple tiles where each individual tile comprises its own separate respective processing unit and memory corresponds to Applicant’s multi-tile processors where individual ones include a plurality of tiles, each tile including one or more cores, i.e., processing units, and memory circuitries coupled to the cores.);
and the CR information includes information regarding at least one of individual ones of the one or more tiles or individual ones of the one or more cores of said individual ones of the tiles ([0027] I.e. in embodiments there may be provided a processing system comprising: an arrangement of multiple chips each comprising multiple tiles, each tile comprising a respective execution unit for executing a program, and respective memory for storing the program and data operated on by the program. Please note that the processing system having multiple chips each comprising multiple tiles each comprising a respective execution unit corresponds to Applicant’s CR information including information regarding individual ones of the one or more tiles, as the system must necessarily contain information regarding each individual tile in order to utilize them for processing. As Applicant states the CR information including “one or more of” the information, this is interpreted as fulfilling the requirements of the limitation.).
Regarding Claim 4, Pace-Wilkinson as described in Claim 3, Wilkinson further discloses wherein the CR information includes at least one of number of MTPs, number of tiles per MTP, number of cores per tile, memory size per MTP, memory size per tile, memory size per core, MTP clock speed, tile clock speed, core clock speed, number of memory controllers per MTP, number of memory controllers per tile, number of memory controllers per core, shared memory size between MTPs, shared memory size between tiles, shared memory size between cores, number of channels per memory controller, interconnect communication bandwidth between MTPs, interconnect communication bandwidth between tiles, interconnect communication bandwidth between cores, interconnect communication latency between MTPs, interconnect communication latency between tiles, interconnect communication latency between cores, number of accelerators per MTP, number of accelerators per tile, number of accelerators per core, cryptographic speed per accelerator, compression speed per MTP, compression speed per tile, compression speed per core, decompression speed per MTP, decompression speed per tile, decompression speed per core, or capability regarding machine-learning processing ([0027] I.e. in embodiments there may be provided a processing system comprising: an arrangement of multiple chips each comprising multiple tiles. Please note that the system having an arrangement of multiple chips each comprising multiple tiles corresponds to Applicant’s CR information including a number of MTPs and a number of tiles per MTP, as this information is inherently needed as part of the operation of the system. As Applicant states “at least one of” the limitations to be contained within the CR information, this is interpreted as fulfilling the requirement.).
Regarding Claim 5, Pace-Wilkinson as described in Claim 4, Wilkinson further discloses the CR information further includes dynamic CR information, the dynamic CR information including: power consumption per MTP, power consumption per tile, power consumption per core, temperature per MTP, temperature per tile, temperature per core, humidity per MTP, humidity per tile, humidity per core, voltage per MTP, voltage per tile, voltage per core, fan speed per MTP, execution time for a given WL per MTP, execution time for a given WL per tile, execution time for a given WL per core, memory access response time per MTP, memory access response time per tile, memory access response per core, WL deployment response time per MTP, WL deployment response time per tile, WL deployment response time per core, wear-and-tear per MTP, wear-and-tear per tile, wear-and-tear per core, or battery life per MTP ([0017] In alternative embodiments the processor module could instead set the count and then, in software, poll the counter until it hits zero, and then sync. However it would cost some power to do this. The hardware mechanism advantageously implements the disclosed scheme in a more power-efficient manner. Please note that implementing the disclosed scheme in a power-efficient manner could require the system to be dynamically aware of the power consumption of a particular processor in order to continuously implement the scheme in a power-efficient manner, corresponding to Applicant’s CR information further including dynamic CR information including power consumption per MTP. Additionally, since Applicant recites the dynamic CR information limitations in the alternative, separated by “or,” the examiner interprets this as meaning that one or more of the limitations fulfill the requirements of the claim.).
Regarding Claim 6, Pace-Wilkinson as described in Claim 5, Wilkinson further discloses wherein the wear-and-tear includes information based on at least one of memory bandwidth availability, number of memory misses, number of WLs deployed per time unit, number of hardware errors, percent of maximum compute headroom being used, memory latency, overclocking, transistor aging, voltage spike, temperature spike, core utilization, one or more Reliability, Availability and Serviceability (RAS) indicators, workload key performance indicators (KPIs), power utilization, cache utilization, or hours used ([0165] In embodiments the sync logic 76 in the external interconnect 72 peripheral is configured such that, if this is not the case due to a programming error or other error (such as a memory parity error), then some or all tiles 4 will not receive an acknowledgement, and therefore that the system will come to a halt at the next external barrier, thus allowing a managing external CPU (e.g. the host) to intervene for debug or system recovery. Please note that the configuration of the sync logic registering programming or memory parity errors corresponds to Applicant’s wear-and-tear including information based on number of hardware errors, as the information regarding number of hardware errors is inherently determined as a result of monitoring for errors. As Applicant states “at least one of” the limitations to be contained within the wear-and-tear information, this is interpreted as fulfilling the requirement.).
Regarding Claim 7, Pace-Wilkinson as described in Claim 6, Wilkinson further discloses further including one or more monitoring units to determine the dynamic CR parameters, the processing circuitry to access the dynamic CR parameters from the one or more monitoring units ([0165] All tiles 4 within the mentioned sync zone are programmed to indicate the same sync zone via the mode operand of their respective SYNC instructions. In embodiments the sync logic 76 in the external interconnect 72 peripheral is configured such that, if this is not the case due to a programming error or other error (such as a memory parity error), then some or all tiles 4 will not receive an acknowledgement, and therefore that the system will come to a halt at the next external barrier, thus allowing a managing external CPU (e.g. the host) to intervene for debug or system recovery. Preferably however the compiler is configured to ensure the tiles in the same zone all indicate the same, correct sync zone at the relevant time. Please note that the sync logic 76 monitoring for errors corresponds to Applicant’s monitoring unit to determine the dynamic CR parameters, i.e., the wear-and-tear information based on the number of hardware errors, the processing circuitry to access the dynamic CR parameters from the monitoring unit, since the system is aware of whether errors occur and will halt.).
Regarding Claim 8, Pace-Wilkinson as described in Claim 7, Pace further discloses wherein the processing circuitry is to access a tile fit policy to recompose the first WL package into the second WL package, the tile fit policy to indicate a mapping between respective types of WLs and respective CRs of the server architecture onto which the respective types of WLs are to be deployed ([0065] recomposing these application programs so that they can be executed on any given platform.; [0073] In an embodiment, a mapping layer can be generated for assets that have run-time dependencies; the mapping layer uses a distribution system to bridge the execution context of a first environment with that of a second environment. The asset executing in the first environment is able to access another resource located in the second environment, even though the asset does not have local access to the resource in the second environment. Please note that recomposing application programs so they can be executed on any given platform by generating a mapping layer for assets that have run-time dependencies to bridge the execution context of a first environment with that of a second corresponds to Applicant’s processing circuitry accessing a tile fit policy to recompose the first WL package into the second WL package, the tile fit policy to indicate a mapping between respective types of WLs and respective CRs of the server architecture onto which the respective types of WLs are to be deployed. Applicant states in [0057] of the Specification that “may recompose the first WL package into a second WL package based on a tile fit policy (TFP) […] The second WL package may include second CR metadata that is different from any first CR metadata of the first WL package,” indicating that the purpose of the TFP is to recompose the WL package so that it goes from fitting one architecture to fitting another.
Therefore, in effect, the cited portion of Pace accomplishes the same result, and could be implemented using the tile system of Wilkinson.).
Regarding Claim 9, Pace-Wilkinson as described in Claim 8, Wilkinson further discloses wherein the tile fit policy is based on data from the one or more monitoring units ([0165] All tiles 4 within the mentioned sync zone are programmed to indicate the same sync zone via the mode operand of their respective SYNC instructions. In embodiments the sync logic 76 in the external interconnect 72 peripheral is configured such that, if this is not the case due to a programming error or other error (such as a memory parity error), then some or all tiles 4 will not receive an acknowledgement, and therefore that the system will come to a halt at the next external barrier, thus allowing a managing external CPU (e.g. the host) to intervene for debug or system recovery. Preferably however the compiler is configured to ensure the tiles in the same zone all indicate the same, correct sync zone at the relevant time. Please note that the sync logic 76 monitoring for errors corresponds to Applicant’s monitoring unit data being used as a basis for the tile fit policy, as the mapping for assets would incorporate data generated by the sync logic 76, which monitors for errors and ensures correct syncing.).
Pace further discloses and determined based on prior deployments of WLs at the server architecture ([0871] A previous copy of the asset may be compared with the current asset. The difference between these two assets is the delta that will be used to create the delta asset. The resulting delta asset represents the changes that would need to be applied in the target environment that has had all the previous deltas applied to the last frame. Please note that the delta asset representing the changes between a previous copy of the asset and the current asset that represents the changes that would need to be applied in the target environment corresponds to Applicant’s tile fit policy being determined based on prior deployments of WLs at the server architecture, as it considers previously deployed assets in order to adapt the current asset to the target environment, i.e., at the server architecture.).
Regarding Claim 10, Pace-Wilkinson as described in Claim 9, Wilkinson further discloses wherein the data from the one or more monitoring units includes dynamic CR parameters ([0165] All tiles 4 within the mentioned sync zone are programmed to indicate the same sync zone via the mode operand of their respective SYNC instructions. In embodiments the sync logic 76 in the external interconnect 72 peripheral is configured such that, if this is not the case due to a programming error or other error (such as a memory parity error), then some or all tiles 4 will not receive an acknowledgement, and therefore that the system will come to a halt at the next external barrier, thus allowing a managing external CPU (e.g. the host) to intervene for debug or system recovery. Preferably however the compiler is configured to ensure the tiles in the same zone all indicate the same, correct sync zone at the relevant time. Please note that the sync logic 76 monitoring for errors corresponds to Applicant’s monitoring unit data including dynamic CR parameters, i.e., the wear-and-tear information based on the number of hardware errors.).
Regarding Claim 11, Pace discloses A computing node of a computing network, the computing node including: a communication interface to communicate with other computing nodes of the computing network; and a processing circuitry coupled to the communication interface, the processing circuitry to ([0074] An exemplary method and/or exemplary embodiment of the present invention distributes an asset to a multi-tiered network node. An asset may represent network and/or application components (e.g., data, objects, applications, program modules, etc.) that may be distributed among the various resources of the network. In an embodiment, a pending notice is received from a distribution server. Please note that an embodiment in which an asset is distributed to a network node, where the asset may be distributed among the various resources of the network, corresponds to Applicant’s computing node of a computing network, including a communication interface to communicate with other computing nodes of the computing network (to receive the asset and to distribute the asset to the resources), and processing circuitry coupled to the communication interface, because, as it is a computing system, it necessarily requires processing circuitry to carry out these operations.):
receive, at the input, a first workload (WL) package including a WL ([0069] an arrangement configured to receive at least one package from at least one enterprise information system (EIS), the packages being subparts of at least one application program, the packages having at least one asset. Please note that an arrangement receiving a package having an asset corresponds to Applicant’s receiving a WL package including a WL at the input.);
determine a first computing resource (CR) metadata corresponding to the WL ([0074] If the notice indicates that at least one asset is pending (i.e., awaiting deployment), an asset descriptor manifest is received from the distribution server. Please note that receiving an asset descriptor manifest for the asset corresponds to Applicant’s determining a first CR metadata corresponding to the WL.);
recompose the first WL package into a second WL package, the second WL package including the WL and second CR metadata different from the first CR metadata, the second CR metadata being based at least in part on CR information regarding a server architecture onto which the WL is to be deployed ([0065] recomposing these application programs so that they can be executed on any given platform.; [0073] In an embodiment, a mapping layer can be generated for assets that have run-time dependencies; the mapping layer uses a distribution system to bridge the execution context of a first environment with that of a second environment. The asset executing in the first environment is able to access another resource located in the second environment, even though the asset does not have local access to the resource in the second environment.; [0181] Another exemplary embodiment and/or exemplary method of the present invention is directed to the extended environment data structure, in which the metadata descriptors provide information to describe any or more of: repository object definitions, scope object definitions, module object definitions, operation object definitions, exception object definitions, constant object definitions, properties object definitions, attribute object definitions, relationship object definitions, type object definitions, and other well known metadata object definitions. 
Please note that recomposing application programs so they can be executed on any given platform by generating a mapping layer for assets that have run-time dependencies to bridge the execution context of a first environment with that of a second corresponds to Applicant’s recomposing the first WL package into a second WL package including the WL and second CR metadata different from the first CR metadata, the second CR metadata being based on CR information regarding a server architecture onto which the WL is to be deployed. Because there is an extended environment data structure including metadata descriptors, this corresponds to the metadata for each particular environment, such as the environment onto which the workload is recomposed to operate, and would include CR information regarding the server architecture onto which it is to be deployed, such as within the operation object definitions.);
and send, from the output, the second WL package to one or more processors of the server architecture to cause deployment of the WL thereon ([0074] The asset descriptor manifest identifies at least one asset to be deployed to the node, and includes an offset associated with the asset identifier […] the entire asset is deployed to the node. Please note that the asset being deployed to the node corresponds to Applicant’s sending the second WL package to one or more processors of the server architecture to cause deployment of the WL thereon from the output, since it is now recomposed and able to be deployed.).
Pace does not explicitly disclose the second CR metadata further to indicate one or more processors of the server architecture onto which the WL is to be deployed.
However, Wilkinson discloses the second CR metadata further to indicate one or more processors of the server architecture onto which the WL is to be deployed ([0071] some or all of the processor chips 2 may comprise a respective array of tiles 4; [0087] the tile 4 in question begins transmitting data packets over the external interconnect 72 each indicating a destination tile 4 in a header of the packet. Please note that the data packet indicating a destination tile 4 for processing, where a tile is a component of a processor chip, corresponds to Applicant’s second CR metadata further indicating processors of the server architecture onto which the WL is to be deployed).
Pace and Wilkinson are both considered to be analogous art to the claimed invention because they are in the same field of computer data exchange between different systems for completing a process. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pace to incorporate the teachings of Wilkinson, such that the WL package recomposition system, with differing first and second CR metadata indicating different server architectures, recomposes and deploys the second WL package with second CR metadata indicating the one or more processors of the server architecture onto which the WL is to be deployed, allowing for improved dispatching of processing and improved system performance through concurrency/parallelism, as described in Wilkinson.
Regarding Claim 12, Pace-Wilkinson discloses the limitations of Claim 11 as described above. Wilkinson further discloses wherein the CR information includes information on individual ones of the one or more processors, and on individual ones of interconnects between the one or more processors ([0040] a processing system comprising an arrangement of multiple processor modules and at least a first interconnect for exchanging data between different sets of the processor modules, each processor module comprising a respective execution unit for executing a program and respective memory for storing the program and data operated on by the program. Please note that the processing system having an arrangement of multiple processor modules and at least a first interconnect for exchanging data between different sets of the processor modules corresponds to Applicant’s CR information including information on individual ones of the processors and interconnects between them, as the system must necessarily contain the information for the processors and their respective interconnects as part of its operation.).
Regarding Claim 13, Pace discloses A non-transitory computer-readable storage medium comprising instructions stored thereon that, when executed by one or more processors of a data center, cause the one or more processors to perform operations including ([0070] In an exemplary embodiment of a computer memory storage device storing a computer program according to the present invention, the computer program includes the steps of: Please note that the computer memory storage device storing a computer program corresponds to Applicant’s non-transitory computer-readable storage medium comprising instructions stored thereon that cause processors of a data center to perform operations when executed, as it would be obvious to execute the program via processors to one of ordinary skill in the art.):
receiving a first workload (WL) package including a WL ([0069] an arrangement configured to receive at least one package from at least one enterprise information system (EIS), the packages being subparts of at least one application program, the packages having at least one asset. Please note that an arrangement receiving a package having an asset corresponds to Applicant’s receiving a WL package including a WL.);
determining a first computing resource (CR) metadata corresponding to the WL ([0074] If the notice indicates that at least one asset is pending (i.e., awaiting deployment), an asset descriptor manifest is received from the distribution server. Please note that receiving an asset descriptor manifest for the asset corresponds to Applicant’s determining a first CR metadata corresponding to the WL.);
recomposing the first WL package into a second WL package, the second WL package including the WL and second CR metadata different from the first CR metadata, the second CR metadata being based at least in part on CR information regarding a server architecture onto which the WL is to be deployed ([0065] recomposing these application programs so that they can be executed on any given platform.; [0073] In an embodiment, a mapping layer can be generated for assets that have run-time dependencies; the mapping layer uses a distribution system to bridge the execution context of a first environment with that of a second environment. The asset executing in the first environment is able to access another resource located in the second environment, even though the asset does not have local access to the resource in the second environment.; [0181] Another exemplary embodiment and/or exemplary method of the present invention is directed to the extended environment data structure, in which the metadata descriptors provide information to describe any or more of: repository object definitions, scope object definitions, module object definitions, operation object definitions, exception object definitions, constant object definitions, properties object definitions, attribute object definitions, relationship object definitions, type object definitions, and other well known metadata object definitions. 
Please note that recomposing application programs so they can be executed on any given platform by generating a mapping layer for assets that have run-time dependencies to bridge the execution context of a first environment with that of a second corresponds to Applicant’s recomposing the first WL package into a second WL package including the WL and second CR metadata different from the first CR metadata, the second CR metadata being based on CR information regarding a server architecture onto which the WL is to be deployed, the second CR metadata further to indicate one or more processors of the server architecture onto which the WL is to be deployed. Since there is an extended environment data structure including metadata descriptors, this corresponds to the metadata for each particular environment such as that to which the workload is recomposed to operate on, and would include CR information regarding the server architecture onto which it is to be deployed, such as within the operation object definitions.);
and sending the second WL package to one or more processors of the server architecture to cause deployment of the WL thereon ([0074] The asset descriptor manifest identifies at least one asset to be deployed to the node, and includes an offset associated with the asset identifier […] the entire asset is deployed to the node. Please note that the asset being deployed to the node corresponds to Applicant’s sending the second WL package to one or more processors of the server architecture to cause deployment of the WL thereon, since it is now recomposed and able to be deployed.).
Pace does not explicitly disclose the second CR metadata further to indicate one or more processors of the server architecture onto which the WL is to be deployed.
However, Wilkinson discloses the second CR metadata further to indicate one or more processors of the server architecture onto which the WL is to be deployed ([0071] some or all of the processor chips 2 may comprise a respective array of tiles 4; [0087] the tile 4 in question begins transmitting data packets over the external interconnect 72 each indicating a destination tile 4 in a header of the packet. Please note that the data packet indicating a destination tile 4 for processing, where a tile is a component of a processor chip, corresponds to Applicant’s second CR metadata further indicating processors of the server architecture onto which the WL is to be deployed).
Pace and Wilkinson are both considered to be analogous art to the claimed invention because they are in the same field of computer data exchange between different systems for completing a process. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pace to incorporate the teachings of Wilkinson, such that the WL package recomposition system, with differing first and second CR metadata indicating different server architectures, recomposes and deploys the second WL package with second CR metadata indicating the one or more processors of the server architecture onto which the WL is to be deployed, allowing for improved dispatching of processing and improved system performance through concurrency/parallelism, as described in Wilkinson.
Regarding Claim 14, Pace-Wilkinson discloses the limitations of Claim 13 as described above. Wilkinson further discloses wherein the CR information includes information on individual ones of the one or more processors, and on individual ones of interconnects between the one or more processors ([0040] a processing system comprising an arrangement of multiple processor modules and at least a first interconnect for exchanging data between different sets of the processor modules, each processor module comprising a respective execution unit for executing a program and respective memory for storing the program and data operated on by the program. Please note that the processing system having an arrangement of multiple processor modules and at least a first interconnect for exchanging data between different sets of the processor modules corresponds to Applicant’s CR information including information on individual ones of the processors and interconnects between them, as the system must necessarily contain the information for the processors and their respective interconnects as part of its operation.).
Regarding Claim 15, Pace-Wilkinson discloses the limitations of Claim 13 as described above. Wilkinson further discloses wherein the CR information includes at least one of number of processors, number of cores per processor, memory size per processor, memory size per core, processor clock speed, core clock speed, number of memory controllers per processor, number of memory controllers per core, shared memory size between processors, shared memory size between cores, number of channels per memory controller, interconnect bandwidth between processors, interconnect communication latency between processors, number of accelerators per processor, number of accelerators per core, cryptographic speed per accelerator, compression speed per processor, compression speed per core, decompression speed per processor, decompression speed per core, or capability regarding machine-learning processing ([0027] I.e. in embodiments there may be provided a processing system comprising: an arrangement of multiple chips each comprising multiple tiles. Please note that the system having an arrangement of multiple chips corresponds to Applicant’s CR information including a number of processors, as this information is inherently needed as part of the operation of the system. Because Applicant recites “at least one of” the listed limitations, disclosure of the number of processors is interpreted as fulfilling the requirement.).
Regarding Claim 16, Pace-Wilkinson discloses the limitations of Claim 13 as described above. Wilkinson further discloses wherein the CR information includes dynamic CR information, the dynamic CR information including at least one of: power consumption per processor, power consumption per core, temperature per processor, temperature per core, humidity per processor, humidity per core, voltage per processor, voltage per core, fan speed per processor, execution time for a given WL per processor, execution time for a given WL per core, memory access response time per processor, memory access response time per core, WL deployment response time per processor, WL deployment response time per core, wear-and-tear per processor, wear-and-tear per core, or battery life per processor ([0017] In alternative embodiments the processor module could instead set the count and then, in software, poll the counter until it hits zero, and then sync. However it would cost some power to do this. The hardware mechanism advantageously implements the disclosed scheme in a more power-efficient manner. Please note that implementing the disclosed scheme in a power-efficient manner could necessitate that the system be dynamically aware of the power consumption of a particular processor in order to continuously implement the scheme in a power-efficient manner, corresponding to Applicant’s CR information further including dynamic CR information including power consumption per processor. Additionally, because Applicant recites the dynamic CR information as including limitations separated by “or,” the examiner interprets disclosure of one or more of the limitations as fulfilling the requirements of the claim.).
Regarding Claim 17, Pace-Wilkinson discloses the limitations of Claim 16 as described above. Wilkinson further discloses wherein the wear-and-tear includes information based on at least one of memory bandwidth availability, number of memory misses, number of WLs deployed per time unit, number of hardware errors, percent of maximum compute headroom being used, memory latency, overclocking, transistor aging, voltage spike, temperature spike, core utilization, one or more Reliability, Availability and Serviceability (RAS) indicators, workload key performance indicators (KPIs), power utilization, cache utilization, or hours used ([0165] In embodiments the sync logic 76 in the external interconnect 72 peripheral is configured such that, if this is not the case due to a programming error or other error (such as a memory parity error), then some or all tiles 4 will not receive an acknowledgement, and therefore that the system will come to a halt at the next external barrier, thus allowing a managing external CPU (e.g. the host) to intervene for debug or system recovery. Please note that the configuration of the sync logic registering programming or memory parity errors corresponds to Applicant’s wear-and-tear including information based on number of hardware errors, as the information regarding number of hardware errors is inherently determined as a result of monitoring for errors. Because Applicant recites “at least one of” the listed limitations, disclosure of the number of hardware errors is interpreted as fulfilling the requirement.).
Regarding Claim 18, Pace-Wilkinson discloses the limitations of Claim 17 as described above. Pace further discloses the operations further including accessing a CR fit policy to recompose the first WL package into the second WL package, the CR fit policy to indicate a mapping between respective types of WLs and respective CRs of the server architecture onto which the respective types of WLs are to be deployed ([0065] recomposing these application programs so that they can be executed on any given platform.; [0073] In an embodiment, a mapping layer can be generated for assets that have run-time dependencies; the mapping layer uses a distribution system to bridge the execution context of a first environment with that of a second environment. The asset executing in the first environment is able to access another resource located in the second environment, even though the asset does not have local access to the resource in the second environment. Please note that recomposing application programs so they can be executed on any given platform by generating a mapping layer for assets that have run-time dependencies to bridge the execution context of a first environment with that of a second corresponds to Applicant’s accessing a CR fit policy to recompose the first WL package into the second WL package, the CR fit policy to indicate a mapping between respective types of WLs and respective CRs of the server architecture onto which the respective types of WLs are to be deployed. Applicant states in [0057] of the Specification that the system “may recompose the first WL package into a second WL package based on a tile fit policy (TFP) […] The second WL package may include second CR metadata that is different from any first CR metadata of the first WL package,” indicating that the purpose of the CR fit policy is to recompose the WL package so that it goes from fitting one architecture to fitting another.
Therefore, in effect, the cited portion of Pace accomplishes the same result, and could be implemented using the system of Wilkinson.);
and determined based on prior deployments of WLs at the server architecture ([0871] A previous copy of the asset may be compared with the current asset. The difference between these two assets is the delta that will be used to create the delta asset. The resulting delta asset represents the changes that would need to be applied in the target environment that has had all the previous deltas applied to the last frame. Please note that the delta asset representing the changes between a previous copy of the asset and the current asset that represents the changes that would need to be applied in the target environment corresponds to Applicant’s tile fit policy being determined based on prior deployments of WLs at the server architecture, as it considers previously deployed assets in order to adapt the current asset to the target environment, i.e., at the server architecture.).
Wilkinson further discloses the CR fit policy further based on data from one or more monitoring units ([0165] All tiles 4 within the mentioned sync zone are programmed to indicate the same sync zone via the mode operand of their respective SYNC instructions. In embodiments the sync logic 76 in the external interconnect 72 peripheral is configured such that, if this is not the case due to a programming error or other error (such as a memory parity error), then some or all tiles 4 will not receive an acknowledgement, and therefore that the system will come to a halt at the next external barrier, thus allowing a managing external CPU (e.g. the host) to intervene for debug or system recovery. Preferably however the compiler is configured to ensure the tiles in the same zone all indicate the same, correct sync zone at the relevant time. Please note that the sync logic 76 monitoring for errors corresponds to Applicant’s monitoring unit data being used as a basis for the CR fit policy, as the mapping for assets would incorporate data generated by the sync logic 76, which monitors for errors and ensures correct syncing.).
Regarding Claim 19, Pace discloses A method to be performed at a computing node of a computing network, the method comprising ([0074] An exemplary method and/or exemplary embodiment of the present invention distributes an asset to a multi-tiered network node. Please note that the exemplary method that distributes an asset to a network node corresponds to Applicant’s method to be performed at a computing node of a computing network.):
receiving a first workload (WL) package including a WL ([0069] an arrangement configured to receive at least one package from at least one enterprise information system (EIS), the packages being subparts of at least one application program, the packages having at least one asset. Please note that an arrangement receiving a package having an asset corresponds to Applicant’s receiving a WL package including a WL.);
determining a first computing resource (CR) metadata corresponding to the WL ([0074] If the notice indicates that at least one asset is pending (i.e., awaiting deployment), an asset descriptor manifest is received from the distribution server. Please note that receiving an asset descriptor manifest for the asset corresponds to Applicant’s determining a first CR metadata corresponding to the WL.);
recomposing the first WL package into a second WL package, the second WL package including the WL and second CR metadata different from the first CR metadata, the second CR metadata being based at least in part on CR information regarding a server architecture onto which the WL is to be deployed ([0065] recomposing these application programs so that they can be executed on any given platform.; [0073] In an embodiment, a mapping layer can be generated for assets that have run-time dependencies; the mapping layer uses a distribution system to bridge the execution context of a first environment with that of a second environment. The asset executing in the first environment is able to access another resource located in the second environment, even though the asset does not have local access to the resource in the second environment.; [0181] Another exemplary embodiment and/or exemplary method of the present invention is directed to the extended environment data structure, in which the metadata descriptors provide information to describe any or more of: repository object definitions, scope object definitions, module object definitions, operation object definitions, exception object definitions, constant object definitions, properties object definitions, attribute object definitions, relationship object definitions, type object definitions, and other well known metadata object definitions. 
Please note that recomposing application programs so they can be executed on any given platform by generating a mapping layer for assets that have run-time dependencies to bridge the execution context of a first environment with that of a second corresponds to Applicant’s recomposing the first WL package into a second WL package including the WL and second CR metadata different from the first CR metadata, the second CR metadata being based on CR information regarding a server architecture onto which the WL is to be deployed, the second CR metadata further to indicate one or more processors of the server architecture onto which the WL is to be deployed. Since there is an extended environment data structure including metadata descriptors, this corresponds to the metadata for each particular environment such as that to which the workload is recomposed to operate on, and would include CR information regarding the server architecture onto which it is to be deployed, such as within the operation object definitions.);
and sending the second WL package to one or more processors of the server architecture to cause deployment of the WL thereon ([0074] The asset descriptor manifest identifies at least one asset to be deployed to the node, and includes an offset associated with the asset identifier […] the entire asset is deployed to the node. Please note that the asset being deployed to the node corresponds to Applicant’s sending the second WL package to one or more processors of the server architecture to cause deployment of the WL thereon, since it is now recomposed and able to be deployed.).
Pace does not explicitly disclose the second CR metadata further to indicate one or more processors of the server architecture onto which the WL is to be deployed.
However, Wilkinson discloses the second CR metadata further to indicate one or more processors of the server architecture onto which the WL is to be deployed ([0071] some or all of the processor chips 2 may comprise a respective array of tiles 4; [0087] the tile 4 in question begins transmitting data packets over the external interconnect 72 each indicating a destination tile 4 in a header of the packet. Please note that the data packet indicating a destination tile 4 for processing, where a tile is a component of a processor chip, corresponds to Applicant’s second CR metadata further indicating processors of the server architecture onto which the WL is to be deployed).
Pace and Wilkinson are both considered to be analogous art to the claimed invention because they are in the same field of computer data exchange between different systems for completing a process. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pace to incorporate the teachings of Wilkinson, such that the WL package recomposition system, with differing first and second CR metadata indicating different server architectures, recomposes and deploys the second WL package with second CR metadata indicating the one or more processors of the server architecture onto which the WL is to be deployed, allowing for improved dispatching of processing and improved system performance through concurrency/parallelism, as described in Wilkinson.
Regarding Claim 20, Pace-Wilkinson discloses the limitations of Claim 19 as described above. Wilkinson further discloses wherein the CR information includes information on individual ones of the one or more processors, and on individual ones of interconnects between the one or more processors ([0040] a processing system comprising an arrangement of multiple processor modules and at least a first interconnect for exchanging data between different sets of the processor modules, each processor module comprising a respective execution unit for executing a program and respective memory for storing the program and data operated on by the program. Please note that the processing system having an arrangement of multiple processor modules and at least a first interconnect for exchanging data between different sets of the processor modules corresponds to Applicant’s CR information including information on individual ones of the processors and interconnects between them, as the system must necessarily contain the information for the processors and their respective interconnects as part of its operation.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Wilkinson et al. (US 20190155768 A1) discloses processors comprising an arrangement of multiple tiles on the same chip with separate respective processing units and memory, connected via interconnects, running a program on different tiles, having nodes with inputs and outputs, performing I/O with tiles, and maintaining optimal bandwidth of tiles (see [0007-0009, 0011-0012, 0043]).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARAZ T AKBARI whose telephone number is (571)272-4166. The examiner can normally be reached Monday-Thursday 9:30am-7:30pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair can be reached at (571)270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FARAZ T AKBARI/ Examiner, Art Unit 2196
/APRIL Y BLAIR/ Supervisory Patent Examiner, Art Unit 2196