DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
1. Claims 1, 13, and 20 are currently amended.
2. Claims 1-20 are pending.
3. Claims 1-20 are rejected.
Response to Arguments
4. Regarding Prior Art Rejections:
Applicant’s amendments and arguments with respect to claims 1, 13, and 20 have been fully considered but are not persuasive. The rejections under 35 U.S.C. 103 are maintained. Additionally, the amended claims are rejected under a new ground of rejection necessitated by the amendment.
5. Applicant argues in remarks:
[0004] The Applicant has amended Claim 1 to recite: "wherein each of the plurality of partition configurations allocates one or more processors to a partition and allocates hardware components to be controlled by the one or more processors to the partition during a boot operation, wherein the service processor differs from the one or more processors." The amendment is supported at least by paragraph 47 of the Application.
With the newly amended claims, the overall scope of the claim no longer reads the same way it did before. Therefore, new art and a new combination thereof were introduced to better suit the new scope of the claims. See below.
6. Additionally, applicant argues:
[0005] The Office Action states that Graffy teaches "storing, at a service processor, a plurality of partition configurations for a computing system, the plurality of partition configurations allocating hardware resources of the computing system" and cites paragraph 43 as evidence. Office Action at pp. 2-3. The Applicant disagrees.
[0006] Paragraph 43 of Graffy teaches a configuration table stored in a data store 104 that includes respective configuration parameter data associated with function blocks, corresponding block protection units (BPUs), the DMA engine 102, and/or other components of the system. Paragraph 43 of Graffy teaches that during a change of context, the respective resources can access the data store 104 and retrieve the appropriate configuration parameter data from the data store for use in connection with an application being swapped in to access the resources. Thus, the configuration parameter data is merely parameters used when executing an application, and paragraph 43 makes it clear that this can happen during swapping an application in and out, which one of skill in the art would associate with happening during normal operation.
In paragraph [0039] of Graffy, the resource management component can pre-load respective configuration parameter data into the respective resources at start-up or re-boot of the system or at another desired time. The resource configuration parameters are initiated at a start-up/re-boot of the system. This indicates that a boot operation initiates the resource configuration.
7. Additionally, applicant argues:
[0007] There is no teaching in Graffy that the configuration parameter data or the configuration table is stored in a service processor, as recited in Claim 1. Note that the Applicant teaches that the service processor is a baseboard management controller (BMC), a Datacenter-ready Secure Control Module (DC-SCM), or an XClarity Controller (XCC) and one of skill in the art would not equate the service processor with any component identified in Graffy. In addition, the configuration parameter data of Graffy is not equivalent to partition configurations that allocate one or more processors and allocated hardware components to be controlled by the one or more processors during a boot operation.
[0008] The Office Action states that Graffy teaches: "associating a configuration schedule with the plurality of partition configurations, wherein each of the plurality of partition configurations is associated with a scheduled time period" and cites paragraphs 37 and 39 as evidence. Office Action at pp. 3-4. The Applicant disagrees.
[0009] While paragraph 37 of Graffy mentions that applications can access the resources based on a schedule, one of skill in the art will recognize that applications function during normal operation of a processor so that the resources are swapped during runtime. One of skill in the art would not equate the resources or configuration parameter data of Graffy that is being scheduled with the partition configurations of amended Claim 1.
[0010] The Office Action states that Graffy teaches "booting the computing system to a particular partition configuration at a particular scheduled time period corresponding to the particular partition configuration" and cites paragraph 39 as evidence. Office Action at pp. 4-5. The Applicant disagrees.
[0011] While paragraph 39 states that the configuration parameter data can be loaded at startup, as mentioned above, the configuration parameter data is not equivalent to partition configurations that allocate processors and associated hardware to various partitions. The Applicant respectfully asserts that Graffy does not read on amended Claim 1 and that amended Claim 1 is allowable.
Examiner respectfully disagrees with applicant’s argument that the configuration parameter data is not equivalent to the partition configurations that allocate processors and associated hardware. Both the configuration parameter data and the partition configurations allocate resources, specifically processors and associated hardware. In Graffy:
[0005] In accordance with a non-limiting, example implementation, a method can comprise storing, by a system comprising a processor, respective sets of configuration data in respective resources of a system-on-chip device, wherein the respective sets of configuration data are associated with respective applications and the respective resources. The method also can comprise controlling, by the system, configuration of and access to the respective resources based at least in part on the respective sets of configuration data associated with respective contexts relating to the respective applications.
[0023] The disclosed subject matter can employ techniques for contextual awareness associated with resources (e.g., hardware resources), including resources of function blocks, of a system (e.g., a system-on-chip or other type of system) to facilitate controlling access to the resources. A resource manager component can pre-load and store a defined number of respective versions of configuration parameter data associated with respective applications in each of the resources of the system.
[0080] The resource management component 400 can comprise a processor component 420 that can work in conjunction with the other components (e.g., communicator component 402, monitor component 404, analyzer component 406, . . . ) to facilitate performing the various functions of the resource management component 400. The processor component 420 can employ one or more processors, microprocessors, or controllers that can process data, such as information relating to resources, respective configuration parameter data associated with respective resources, context of the system, applications, scheduling of access to resources by applications, and/or other information, to facilitate operation of the resource management component 400, as more fully disclosed herein, and control data flow between the resource management component 400 and other components associated with the resource management component 400.
Here, Graffy teaches how the hardware resources, which are associated with a processor component, are allocated based on the configuration parameter data. However, new art was introduced to better suit the overall scope of the amended claim. See below.
8. With the newly amended claims, the overall scope of the claim no longer reads the same way it did before. Therefore, new art and a new combination thereof were introduced to better suit the new scope of the claims. In prior art, Ye teaches booting the computing system to a particular partition configuration by allocating one or more processors to a partition and allocating hardware components to be controlled by the processors, wherein a service processor stores the partition configurations:
In a first aspect, the present application provides a method of starting a system. A part of hardware resources in a computing device required for the computing device to start an Operating System (OS) is specified in resource configuration information in advance. In the method, a computing device acquires the resource configuration information, initializes the hardware resource specified by the resource configuration information in the computing device, and starts the OS on the initialized hardware resource; In this way, only the minimum hardware resources required to boot the OS in the computing device are initialized according to the resource configuration information; According to the method, when the processor specified by the resource configuration information is initialized, only part of the processor cores in the processor are initialized, so that the time for initializing the processor is reduced. In addition, the processors specified by the initialization resource configuration information may be part of the processors in the computing device, or even only a few processors or one processor in the computing device may be initialized, which reduces the time to initialize the processors of the computing device since it is not necessary to initialize all the processors of the computing device; A manner of initializing memory in a computing device, the resource configuration information specifying memory of a target memory capacity, the target memory capacity being less than a total capacity of memory included by the computing device; for example, the resource allocation information may specify a processor first, and then specify a memory with a target memory capacity in all memory banks connected to the processor specified by the resource allocation information.
However, Ye fails to explicitly teach a plurality of partition configurations and associating a configuration schedule with the plurality of partition configurations, wherein each of the plurality of partition configurations is associated with a scheduled time period. In analogous art, Hu teaches:
[0026] The workload schedulers make requests to the resource scheduler for multiple resource allocations in one or multiple plans, receive resource allocations with much better predictivity derived from the plans, continue using the resource allocations to run different workloads, and release all or fractions of the resource allocations if the resources allocations are no longer needed;
[0045] Resource allocation plans 117 may have multiple levels of allocation attributes. For example, a resource allocation plan 117 may contain a number of consumer arrays and/or consumer sets and have two levels of allocation attributes, one at the consumer array/consumer set level and another at the resource allocation plan 117 level. The allocation attributes of allocation specifications, allocation goals, scheduling hints, and/or time constraints may be specified at the consumer array/consumer set level as well as the resource allocation plan 117 level.),
Together, Ye and Hu teach booting the computing system to a particular partition configuration of a plurality of partition configurations associated with a scheduled time period by allocating one or more processors to a partition and allocating hardware components to be controlled by the processors, wherein a service processor stores the partition configuration.
9. Additionally, claims 2-12 and 14-19 depend from and further limit amended claims 1, 13, and 20 and are therefore also rejected under 35 U.S.C. 103. The full rejection can be found in the 35 U.S.C. 103 rejection section below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
10. Claims 1, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ye et al. CN 108255527 A in view of Hu et al. US 20230050163 A1.
11. With regard to claim 1, Ye teaches:
A method comprising:
storing, at a service processor, a plurality of partition configurations for a computing system, the plurality of partition configurations allocating hardware resources of the computing system, wherein each of the plurality of partition configurations allocates one or more processors to a partition and allocates hardware components to be controlled by the one or more processors to the partition during a boot operation, wherein the service processor differs from the one or more processors (In a first aspect, the present application provides a method of starting a system. A part of hardware resources in a computing device required for the computing device to start an Operating System (OS) is specified in resource configuration information in advance. In the method, a computing device acquires the resource configuration information, initializes the hardware resource specified by the resource configuration information in the computing device, and starts the OS on the initialized hardware resource; In this way, only the minimum hardware resources required to boot the OS in the computing device are initialized according to the resource configuration information; According to the method, when the processor specified by the resource configuration information is initialized, only part of the processor cores in the processor are initialized, so that the time for initializing the processor is reduced. 
In addition, the processors specified by the initialization resource configuration information may be part of the processors in the computing device, or even only a few processors or one processor in the computing device may be initialized, which reduces the time to initialize the processors of the computing device since it is not necessary to initialize all the processors of the computing device; A manner of initializing memory in a computing device, the resource configuration information specifying memory of a target memory capacity, the target memory capacity being less than a total capacity of memory included by the computing device; for example, the resource allocation information may specify a processor first, and then specify a memory with a target memory capacity in all memory banks connected to the processor specified by the resource allocation information; Examiner’s Note: When starting/booting a system, a resource configuration that is specified in advance is acquired. This causes the hardware resources specified by the resource configuration to be initialized. The hardware resources include a processor(s) specified by the resource configuration and a memory connected to the processor, which is a hardware component controlled by the processor.);
booting the computing system to a particular partition configuration at a particular scheduled time period corresponding to the particular partition configuration (With reference to the first aspect or any one of the possible designs of the first aspect, in this possible design, the resource configuration information specifies a memory with a target memory capacity, where the target memory capacity is smaller than a total capacity of a memory included in the computing device. The computing device initializes the memory with the target memory capacity specified by the resource configuration information in the computing device, and the initialized memory with the target memory capacity is used by the initialized processor; the processor executes the instructions stored in the memory to cause the computing device to perform the method of booting a system provided by the first aspect or various possible designs of the first aspect.).
Although Ye teaches booting the computing system to a particular partition configuration by allocating one or more processors to a partition and allocating hardware components to be controlled by the processors, wherein a service processor stores the partition configurations, Ye fails to explicitly teach a plurality of partition configurations and associating a configuration schedule with the plurality of partition configurations, wherein each of the plurality of partition configurations is associated with a scheduled time period.
However, in analogous art, Hu teaches:
a plurality of partition configurations ([0026] The workload schedulers make requests to the resource scheduler for multiple resource allocations in one or multiple plans, receive resource allocations with much better predictivity derived from the plans, continue using the resource allocations to run different workloads, and release all or fractions of the resource allocations if the resources allocations are no longer needed; [0045] Resource allocation plans 117 may have multiple levels of allocation attributes. For example, a resource allocation plan 117 may contain a number consumer arrays and/or consumer sets and have two levels of allocation attributes, one at the consumer array/consumer set level and another at the resource allocation plan 117 level. The allocation attributes of allocation specifications, allocation goals, scheduling hints, and/or time constraints may be specified at the consumer array/consumer set level as well as the resource allocation plan 117 level.),
associating a configuration schedule with the plurality of partition configurations, wherein each of the plurality of partition configurations is associated with a scheduled time period ([0026] The workload schedulers make requests to the resource scheduler for multiple resource allocations in one or multiple plans, receive resource allocations with much better predictivity derived from the plans, continue using the resource allocations to run different workloads, and release all or fractions of the resource allocations if the resources allocations are no longer needed; [0044] Time constraints include preferred times to meet the resource allocation plan 117 and may include the time to meet the minimum total, time to meet the maximum total and time windows to indicate what time windows or ranges may be applied and whether the time window is periodic or one-off, or if the time window may be considered in conjunction with other resource allocation plans 117.),
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ye with the teachings of Hu of a plurality of partition configurations and associating a configuration schedule with the plurality of partition configurations, wherein each of the plurality of partition configurations is associated with a scheduled time period. Ye teaches booting the computing system to a particular partition configuration by allocating one or more processors to a partition and allocating hardware components to be controlled by the processors, wherein a service processor stores the partition configuration. Similarly, Hu teaches multiple resource allocation plans that have associated time constraints, which indicate preferred times to meet the resource allocation plan. Additionally, Hu teaches that benefits of the above-disclosed exemplary implementations include: (1) by having the workload schedulers 115 submit resource allocation plans 117 to the resource scheduler 215, the plans including specific allocation plan attributes for the resources being requested, and having the resource scheduler 215 allocate computing resources to the workload schedulers 115 in accordance with the resource allocation plans 117, the performance and fragmentation problems caused by sporadic, frequent and unplanned interactions between the workload scheduler 115 and the resource scheduler 215 are mitigated; (2) the workload scheduler 115 can make a request to the resource scheduler 215 for multiple resource allocations 120 in one or multiple resource allocation plans 117, receive resource allocations 120 with much better predictivity derived from the resource allocation plans 117, continue using its existing resource allocations 120 to run different workloads 110, and partially release fractions of the resource allocations 120 if they are no longer needed [...] ([0049]).
Together, Ye and Hu teach booting the computing system to a particular partition configuration of a plurality of partition configurations associated with a scheduled time period by allocating one or more processors to a partition and allocating hardware components to be controlled by the processors, wherein a service processor stores the partition configuration.
12. Regarding claim 13, it is rejected under the same rationale as claim 1 above.
13. Regarding claim 20, it is rejected under the same rationale as claim 1 above.
14. Claims 2 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Ye et al. CN 108255527 A and Hu et al. US 20230050163 A1, as applied in claim 1, in further view of Lal et al. US 20210117246 A1.
15. With regard to claim 2, Ye and Hu teach the method of claim 1 but fail to explicitly teach further comprising running a real-time clock ("RTC") at the service processor, wherein booting the computing system to the particular partition configuration at the particular scheduled time period is based on an output of the RTC.
However, in analogous art, Lal teaches:
further comprising running a real-time clock ("RTC") at the service processor, wherein booting the computing system to the particular partition configuration at the particular scheduled time period is based on an output of the RTC ([0442] After initial discovery and enumeration, periodic messages from the GPU to the service to keep it up to date about available GPU resources allows the service to allocate the GPU resources to any remote client that requests it; [0586] The trusted time service 4350 can create multiple timers, rooted in RTC 4355 to support monitoring time-based policy for multiple tenants simultaneously; Examiner’s Note: RTC is used to monitor time-based policy, such as resource allocation (partition configuration).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ye and Hu with the teachings of Lal further comprising running a real-time clock ("RTC") at the service processor, wherein booting the computing system to the particular partition configuration at the particular scheduled time period is based on an output of the RTC. As discussed in Lal, the RTC includes the following properties: it is resistant to physical tampering; it persists across FPGA resets; an epoch is associated with it to detect reset or rollover; and enables the trusted time service to read RTC time with integrity. The RTC is set by the CSP securely and is synchronized with CSP's authorizing entity's time ([0586]). Using an RTC provides a reliable way to keep track of time and ensure resources are correctly allocated to clients during their requested times.
16. Regarding claim 14, it is rejected under the same rationale as claim 2 above.
17. Claims 3, 10, 15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ye et al. CN 108255527 A and Hu et al. US 20230050163 A1, as applied in claim 1, in further view of Graffy et al. US 20180341518 A1.
18. With regard to claim 3, Ye and Hu teach the method of claim 1 but fail to explicitly teach wherein booting the computing system to the particular partition configuration at the particular scheduled time period comprises shutting down the computing system at a predetermined time prior to a beginning of a next scheduled time period, switching to the particular partition configuration, and re-booting the computing system.
However, in analogous art, Graffy teaches:
wherein booting the computing system to the particular partition configuration at the particular scheduled time period comprises shutting down the computing system at a predetermined time prior to a beginning of a next scheduled time period, switching to the particular partition configuration, and re-booting the computing system ([0039] The resource management component 126 can pre-load or facilitate pre-loading the respective configuration parameter data into the respective resources at start-up or re-boot of the system 100 or at another desired time. The resource management component 126 also can update configuration parameter data in or associated with resources with respect to an application(s) to replace or modify configuration parameter data, for example, when changes are made to configuration parameters due to changes in the application (e.g., application updates) or other enhancements; Examiner’s Note: At start up or reboot, which indicates that the computer was turned off and then back on, the resource management component also can update configuration parameter data in or associated with resources with respect to an application(s) to replace or modify configuration parameter data.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ye and Hu with the teachings of Graffy wherein booting the computing system to the particular partition configuration at the particular scheduled time period comprises shutting down the computing system at a predetermined time prior to a beginning of a next scheduled time period, switching to the particular partition configuration, and re-booting the computing system. Together, Ye and Hu teach booting the computing system to a particular partition configuration of a plurality of partition configurations associated with a scheduled time period by allocating one or more processors to a partition and allocating hardware components to be controlled by the processors, wherein a service processor stores the partition configuration. Similarly, Graffy teaches pre-loading, or facilitating pre-loading of, the respective configuration parameter data into the respective resources at start-up or re-boot of the system or at another desired time ([0039]). This helps reflect changes to configuration parameters due to updates or other enhancements, as discussed in Graffy ([0039]).
19. With regard to claim 10, Graffy further teaches:
wherein associating the configuration schedule with the plurality of partition configurations comprises dividing a day into a plurality of timeframes and assigning each of the plurality of timeframes to one of the plurality of partition configurations ([0075] The resource management component 400 can include a scheduler component 414 that can generate a schedule to allocate respective time periods to applications to access one or more resources of the system. The schedule can be an RTOS schedule, for example.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ye and Hu with the teachings of Graffy wherein associating the configuration schedule with the plurality of partition configurations comprises dividing a day into a plurality of timeframes and assigning each of the plurality of timeframes to one of the plurality of partition configurations. Together, Ye and Hu teach booting the computing system to a particular partition configuration of a plurality of partition configurations associated with a scheduled time period by allocating one or more processors to a partition and allocating hardware components to be controlled by the processors, wherein a service processor stores the partition configuration. Similarly, Graffy teaches pre-loading the respective configuration parameter data into the respective resources at start-up or re-boot of the system or at another desired time ([0039]). Additionally, Graffy teaches a scheduler that creates a schedule to allocate respective time periods to applications to access resources of the system; the schedule can be an RTOS schedule. This allows resource allocation to be based at least in part on time, relative priorities of applications with respect to each other, events (e.g., external events) that occur which can result in a particular application(s) being granted access to resources over another application(s) in response to the event, and/or other considerations, as discussed in Graffy ([0075]).
20. Regarding claim 15, it is rejected under the same rationale as claim 3 above.
21. Regarding claim 19, it is rejected under the same rationale as claim 10 above.
22. Claims 4, 9, 16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ye et al. CN 108255527 A and Hu et al. US 20230050163 A1, as applied in claim 1, in further view of Liao US 20160299874 A1.
23. With regard to claim 4, Ye and Hu teach the method of claim 3 but fail to teach wherein the computing system comprises a Field Programmable Gate Array ("FPGA") coupled to the service processor, wherein shutting down the computing system at the predetermined time comprises signaling the FPGA to power down an active partition and to reconfigure computing system according to the particular partition configuration.
However, in analogous art, Liao teaches:
wherein the computing system comprises a Field Programmable Gate Array ("FPGA") coupled to the service processor, wherein shutting down the computing system at the predetermined time comprises signaling the FPGA to power down an active partition and to reconfigure computing system according to the particular partition configuration ([0068] As illustrated in FIG. 3, hub ASIC 340 connects with the blade controller 310 by way of a field-programmable gate array (“FPGA”) 342 or similar programmable device for passing signals between integrated circuits.; [0082] The system operator may monitor the health of each partition and take remedial steps when a hardware or software error is detected. The current state of long-running application programs may be saved either periodically or on the command of the system operator or application user to non-volatile storage to guard against losing work in the event of a system or application crash. The system operator or a system user may issue a command to shut down application software. When administratively required, the system operator or an administrative application may entirely shut down a computing partition, reallocate or deallocate computing resources in a partition, or power down the entire HPC system 100; Examiner’s Note: The FPGA sends signals between hardware components. One action could be to shut down a computing partition to reallocate or deallocate computing resources.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ye and Hu with the teachings of Liao wherein the computing system comprises a Field Programmable Gate Array ("FPGA") coupled to the service processor, wherein shutting down the computing system at the predetermined time comprises signaling the FPGA to power down an active partition and to reconfigure computing system according to the particular partition configuration. As Liao discusses, [0053] The field-programmable nature of the FPGA 342 permits the interface between the blade controller 310 and ASIC 340 to be reprogrammable after manufacturing. Thus, the blade controller 310 and ASIC 340 may be designed to have certain generic functions while the FPGA 342 may be used advantageously to program the use of those functions in an application-specific way. Using an FPGA allows the system to be changed when necessary in order to carry out specific tasks, in this case reconfiguring the computer system based on a particular partition configuration.
24. With regard to claim 9, Ye and Hu teach the method of claim 1 but fail to explicitly teach wherein at least one of the plurality of partition configurations comprises a multi-socket configuration.
However, in analogous art, Liao teaches:
wherein at least one of the plurality of partition configurations comprises a multi-socket configuration ([0020] A system consistent with the presently claimed invention may include a plurality of processor sockets, and each of the processor sockets may include one or more CPUs and a memory.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ye and Hu with the teachings of Liao wherein at least one of the plurality of partition configurations comprises a multi-socket configuration. Multi-socket configurations allow for additional CPUs and memory, as discussed in Liao ([0020]), thereby allowing more demanding workloads to be processed.
25. Regarding claim 16, it is rejected under the same reasoning and rationale as claim 4 above.
26. Regarding claim 18, it is rejected under the same reasoning and rationale as claims 9 and 11 above.
27. Claims 5, 8, 11, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Ye et al. CN 108255527 A and Hu et al. US 20230050163 A1, as applied to claim 1 above, and further in view of Roberts et al. US 10164639 B1.
28. With regard to claim 5, Ye and Hu teach the method of claim 1 but fail to explicitly teach wherein at least one of the plurality of partition configurations comprises a multi-partition configuration having multiple partitions, wherein the hardware resources of the computing system are shared among the multiple partitions.
However, in analogous art, Roberts teaches:
wherein at least one of the plurality of partition configurations comprises a multi-partition configuration having multiple partitions, wherein the hardware resources of the computing system are shared among the multiple partitions (Col. 9, lines 29-47, In one embodiment, the macro scheduler 130 allows hardware resources (e.g., macro schedulers) of a single FPGA device to be shared between two different designs from the same or different client devices. In particular, the resource allocation logic 413 allocates macro components from an FPGA device for a first design requested by a first client (e.g., client 110), and then allocates macro components from the same FPGA device for a second design requested by a second client (e.g., client 111). Different macro components of a single FPGA device can thus be shared among multiple designs. In one embodiment, the resource allocation logic 413 may also allocate portions of a single macro component among different designs. For example, two different designs may each use less than half of a memory macro component; accordingly, the resource allocation logic 413 can allocate a single memory macro component to be shared between the two designs, with the first design utilizing an upper portion of the memory and the second design utilizing a lower portion of the memory; Examiner’s Note: The designs are analogous to partition configurations. Each design gets allocated resources. The resources are hardware resources of the computing system that are shared by the designs.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ye and Hu with the teachings of Roberts wherein at least one of the plurality of partition configurations comprises a multi-partition configuration having multiple partitions, wherein the hardware resources of the computing system are shared among the multiple partitions. As discussed in Roberts, two different designs may each use less than half of a memory macro component; accordingly, the resource allocation logic can allocate a single memory macro component to be shared between the two designs, with the first design utilizing an upper portion of the memory and the second design utilizing a lower portion of the memory (Col. 9, lines 29-47). By allowing the designs to share resources, the resources are distributed and used as efficiently as possible.
29. With regard to claim 8, Ye and Hu teach the method of claim 5 but fail to explicitly teach further comprising performing hardware virtualization while the multi-partition configuration is active and/or running a virtual machine on at least one partition while the multi-partition configuration is active.
However, in analogous art, Roberts teaches:
further comprising performing hardware virtualization while the multi-partition configuration is active and/or running a virtual machine on at least one partition while the multi-partition configuration is active (Col. 2, lines 20-29, In one embodiment, a datacenter supports virtualization of its FPGA devices by organizing FPGA hardware resources into logical units called macro components, such that accelerator designs can be specified as macro graphs defining connections between macro components. One or more FPGA macro schedulers for scheduling use of the macro components are integrated in the FPGA devices themselves and/or are operated as standalone units connected to the FPGA devices through a network or system interconnect; Col. 11, line 64 – Col. 12, line 10, In one embodiment, the macro components are allocated for specific time periods (i.e., scheduled); for example, a macro component may be allocated to a first design during one time period and to a second design during a different time period. For accelerator designs defined by macro graphs having multiple macro components, the resource allocation logic 413 may identify a time period during which all of the macro components specified in the design are available and can be scheduled. In one embodiment, the resource allocation logic 413 optimizes the scheduling of multiple designs from the same client or multiple different clients to maximize usage of the macro components over time; Examiner’s Note: FPGA devices can be virtualized. The allocation of resources to devices is a multi-partition configuration.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ye and Hu with the teachings of Roberts further comprising performing hardware virtualization while the multi-partition configuration is active and/or running a virtual machine on at least one partition while the multi-partition configuration is active. By implementing FPGA virtualization, the FPGA can be shared among multiple clients in order to share its hardware resources, as discussed in Roberts. The standalone macro scheduler 130 has access to requests from multiple clients (e.g., 110 and 111) and tracks the resources of multiple FPGA devices (e.g., 121-123), and can therefore identify a greater number of optimal placements and schedules. (Col. 3, lines 31-35).
30. With regard to claim 11, Ye and Hu teach the method of claim 1 but fail to explicitly teach wherein storing the plurality of partition configurations comprises storing a computer image associated with each of the plurality of partition configurations.
However, in analogous art, Roberts teaches:
wherein storing the plurality of partition configurations comprises storing a computer image associated with each of the plurality of partition configurations (Col. 3, lines 49-55, The clients 110 and 111 provide design definitions, task definitions, and other information (e.g., configuration bitfiles) to an API in the standalone macro scheduler 130, or in the local macro schedulers 131-133. The macro schedulers 130-133 allocate hardware resources of the FPGAs 121-123 and schedule task execution in response to the clients' requests; Col. 4, lines 1-14, The macro schedulers 130-133 in the computing system 100 perform the functions of allocating hardware resources of the FPGAs 121-123 for implementing the requested accelerator configurations and scheduling the requested tasks for execution in the accelerators. The macro schedulers 130-133 also perform context switching to allow switching between tasks and configurations (i.e., bitstream swapping). For example, a context switch may entail saving the register and memory state for a configured region (e.g., including a set of configured macro components), restoring a previously saved state to the same region, and reconfiguring the region for executing a different task. The previously saved state can then be restored to resume execution of the original task at a later time; Examiner’s Note: Bitfiles contain configuration information. The register and memory state for a configured region can be saved, which essentially produces an image that allows the previously saved state to be restored.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ye and Hu with the teachings of Roberts wherein storing the plurality of partition configurations comprises storing a computer image associated with each of the plurality of partition configurations. By saving the register and memory state for a configured region, or creating an image, the previously saved state can be restored, as discussed in Roberts (Col. 4, lines 1-14). This allows the system to restore its previous state in the event of a failure.
31. Regarding claim 17, it is rejected under the same reasoning and rationale as claims 5 and 6 above.
32. Regarding claim 18, it is rejected under the same reasoning and rationale as claims 9 and 11 above.
33. Claims 6-7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Ye et al. CN 108255527 A and Hu et al. US 20230050163 A1, as applied to claim 1 above, in view of Roberts et al. US 10164639 B1, as applied to claim 5 above, and further in view of Landis et al. US 20070028244 A1.
34. With regard to claim 6, Ye, Hu, and Roberts teach the method of claim 5 but fail to explicitly teach wherein each of the multiple partitions comprises a processor executing an instance of an operating system.
However, in analogous art, Landis teaches:
wherein each of the multiple partitions comprises a processor executing an instance of an operating system ([0001] The invention relates to computer system para-virtualization using a hypervisor that is implemented in a distinct logical or virtual partition of the host system so as to manage multiple operating systems running in other distinct logical or virtual partitions of the host system. The hypervisor implements a partition policy and resource services that provide for more or less automatic operation of the virtual partitions in a relatively failsafe manner; [0002] Computer system virtualization allows multiple operating systems and processes to share the hardware resources of a host computer; [0013] Similarly, while each command partition system on each node may automatically reallocate resources to the resource database lists of different ultravisor resources on the same multi-processor node in the event of the failure of one or more processors of that node, the controlling operations partitions in a virtual data center implementation may further automatically reallocate resources across multiple nodes in the event of a node failure; Claim 5, The virtualization system of claim 4, wherein a system partition that experiences a processing failure is recovered by rebooting said failed system partition, reassigning system resources preserved for the failed system partition to the rebooted system partition and rolling back any pending transactions in progress by said failed partition to reinstate a status of the resource database entries to a status prior to the time of failure of said system partition; Examiner’s Note: Sharing of resources and reassigning system resources is analogous to multiple partitions. There are multiple operating systems running.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ye, Hu, and Roberts with the teachings of Landis wherein each of the multiple partitions comprises a processor executing an instance of an operating system. Computer system virtualization allows multiple operating systems and processes to share the hardware resources of a host computer. This ensures that each operating system does not realize that it is sharing resources with another operating system and does not adversely affect the execution of the other operating system. Such system virtualization enables applications including server consolidation, co-located hosting facilities, distributed web services, applications mobility, secure computing platforms, and other applications that provide for efficient use of underlying hardware resources, as discussed in Landis ([0002]).
35. With regard to claim 7, Landis further teaches:
further comprising running different operating systems on at least two of the multiple partitions ([0001] The invention relates to computer system para-virtualization using a hypervisor that is implemented in a distinct logical or virtual partition of the host system so as to manage multiple operating systems running in other distinct logical or virtual partitions of the host system. The hypervisor implements a partition policy and resource services that provide for more or less automatic operation of the virtual partitions in a relatively failsafe manner; [0002] Computer system virtualization allows multiple operating systems and processes to share the hardware resources of a host computer; [0405] In an exemplary implementation of the system of FIGS. 1 and 2, the ultravisor application and hypervisor system call interface software is loaded on a host system 10 to manage multiple operating systems running in logical or virtual partitions of an ES7000 host system. Several such host systems 10 may be interconnected as virtual data centers through expansion of the ultravisor management capability across nodes. The goal of the ultravisor system as described herein is to provide a flexible repartitioning of the available hardware resources into many isolated virtual systems; Examiner’s Note: Each OS is using different resource allocations (partitions), which indicates that different OS are running on at least two of the multiple partitions.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ye, Hu, and Roberts with the teachings of Landis further comprising running different operating systems on at least two of the multiple partitions. This allows multiple operating systems to share the hardware resources of a single host computer. This ensures that each operating system does not realize that it is sharing resources with another operating system and does not adversely affect the execution of the other operating system. Such system virtualization enables applications including server consolidation, co-located hosting facilities, distributed web services, applications mobility, secure computing platforms, and other applications that provide for efficient use of underlying hardware resources, as discussed in Landis ([0002]).
36. Regarding claim 17, it is rejected under the same reasoning and rationale as claims 5 and 6 above.
37. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Ye et al. CN 108255527 A and Hu et al. US 20230050163 A1, as applied to claim 1 above, and further in view of Armes US 20190339754 A1.
38. With regard to claim 12, Ye and Hu teach the method of claim 1 but fail to explicitly teach wherein the service processor comprises a datacenter-ready secure control module ("DC-SCM") or a baseboard management controller ("BMC").
However, in analogous art, Armes teaches:
wherein the service processor comprises a datacenter-ready secure control module ("DC-SCM") or a baseboard management controller ("BMC") ([0019] In the example shown in FIG. 2, the machine-readable storage medium 250 may be a memory resource that stores instructions that when executed cause a processing resource, such as processor 270 to implement a system with base management controller interfaces. The instructions include interface instructions 260, such as instructions 262, 264, 266.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ye and Hu with the teachings of Armes wherein the service processor comprises a datacenter-ready secure control module ("DC-SCM") or a baseboard management controller ("BMC"). Together, Ye and Hu teach booting the computing system to a particular partition configuration of a plurality of partition configurations associated with a scheduled time period by allocating one or more processors to a partition and allocating hardware components to be controlled by the processors, wherein a service processor stores the partition configuration. Similarly, Armes teaches a base management controller (BMC) that causes a processing resource, such as a processor, to implement a system with BMC interfaces ([0019]). Using a BMC in association with a processor helps regulate the computing system/device. The base management controller interface includes a power monitoring interface connected to management software to distribute and monitor additional power to a host server, a temperature interface to monitor a temperature of the host server, and a flow control interface to control a flow rate of liquid in a liquid cooling manifold, as discussed in Armes ([0010]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AN-AN N NGUYEN whose telephone number is (571)272-6147. The examiner can normally be reached Monday-Friday 8:00-5:00 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, AIMEE LI can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AN-AN NGOC NGUYEN/Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195