Prosecution Insights
Last updated: April 19, 2026
Application No. 18/229,649

Live Migration Method and System Thereof

Status: Final Rejection (§103)
Filed: Aug 02, 2023
Examiner: TSAI, SHENG JEN
Art Unit: 2139
Tech Center: 2100 — Computer Architecture & Software
Assignee: Wistron Corporation
OA Round: 4 (Final)

Grant Probability: 70% (Favorable)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 3y 6m
Grant Probability With Interview: 83%

Examiner Intelligence

Career Allow Rate: 70% (grants above average; +15.4% vs TC avg; 556 granted / 790 resolved)
Interview Lift: +13.0% (moderate lift; resolved cases with vs. without interview)
Avg Prosecution: 3y 6m (typical timeline); 25 applications currently pending
Career History: 815 total applications across all art units

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 24.7% (-15.3% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)

Comparisons are against a Tech Center average estimate; based on career data from 790 resolved cases.

Office Action

§103
DETAILED ACTION

1. This Office Action is taken in response to Applicants’ Amendments and Remarks filed on 11/5/2025 regarding application 18/229,649 filed on 8/2/2023. Claims 1-20 are pending for consideration.

2. Response to Amendments and Remarks

Applicants’ amendments and remarks have been fully and carefully considered, with the Examiner’s response set forth below.

(1) In view of Applicant’s amendments and remarks, the §101 rejections for claims 10 and 20 have been withdrawn.

(2) In response to the amendments and remarks, an updated claim analysis has been made. Refer to the corresponding sections of this Office Action for details.

3. Examiner’s Note

(1) In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention. This will assist in expediting compact prosecution. MPEP 714.02 recites: “Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” Amendments not pointing to specific support in the disclosure may be deemed as not complying with the provisions of 37 C.F.R. 1.121(b), (c), (d), and (h) and therefore held not fully responsive. Generic statements such as “Applicants believe no new matter has been introduced” may be deemed insufficient.

(2) Examiner has cited particular columns/paragraphs and line numbers in the references applied to the claims for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well.
Applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or discussed by the Examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1-6, 8, 11-16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Piednoel (US Patent Application Publication 2024/0378172) in view of Kim et al. (US Patent Application Publication 2021/0104291, hereinafter Kim).

As to claim 1, Piednoel teaches A live migration method, for a shared memory of a system [as shown in figure 2, where the corresponding “shared memory” comprises the first memory (215) of the first SoC (210) and the second memory (225) of the second SoC (220); FIG. 2 is a block diagram depicting an example computing system 200 implementing a multiple system-on-chip (MSoC), in accordance with examples described herein. In various examples, the computing system 200 can include a first SoC 210 having a first memory 215 and a second SoC 220 having a second memory 225 coupled by an interconnect 240 (e.g., an ASIL-D rated interconnect) that enables each of the first SoC 210 and second SoC 220 to read each other's memories 215, 225.
During any given session, the first SoC 210 and the second SoC 220 may alternate roles, between a primary SoC and a backup SoC … For example, if the first SoC 210 is the primary SoC and the second SoC 220 is the backup SoC, then the first SoC 210 performs a set of autonomous driving tasks and publishes state information corresponding to these tasks in the first memory 215. The second SoC 220 reads the published state information in the first memory 215 to continuously check that the first SoC 210 is operating within nominal thresholds (e.g., temperature thresholds, bandwidth and/or memory thresholds, etc.), and that the first SoC 210 is performing the set of autonomous driving tasks properly … In various implementations, the second SoC 220 can publish state information corresponding to its computational components being maintained in a standby state (e.g., a low power state in which the second SoC 220 maintains readiness to take over the set of tasks from the first SoC 210). In such examples, the first SoC 210 can monitor the state information of the second SoC 220 by continuously or periodically reading the memory 225 of the second SoC 220 to also perform health check monitoring and error management on the second SoC 220 … (¶ 0034-0036); Kim also teaches this limitation -- A memory system including a first central processing unit, a first memory module connected to the first central processing unit by a first channel, a second memory module connected to the first central processing unit by a second channel, and a third memory module connected to the first central processing unit by a third channel may be provided. 
Each of the first memory module, the second memory module, and the third memory module may be configured to write the same data in a data area thereof and a mirroring data area thereof in response to an address in a mirroring mode (abstract)], comprising: receiving a state of a first system-on-chip (SoC), wherein the first SoC is configured to write the state of the first SoC into at least part of a second memory of a second SoC in the shared memory without copying the state [A computing system can include a first system on chip (SoC) and a second SoC. Each SoC can comprise a memory in which the SoC publishes state information. For the first SoC, the state information can correspond to a set of tasks being performed by the first SoC, where the first SoC utilizes a plurality of computational components to perform the set of tasks. The first SoC can directly access the memory of the first SoC to dynamically read the state information published by the first SoC … (abstract); For example, if the first SoC 210 is the primary SoC and the second SoC 220 is the backup SoC, then the first SoC 210 performs a set of autonomous driving tasks and publishes state information corresponding to these tasks in the first memory 215. The second SoC 220 reads the published state information in the first memory 215 to continuously check that the first SoC 210 is operating within nominal thresholds (e.g., temperature thresholds, bandwidth and/or memory thresholds, etc.), and that the first SoC 210 is performing the set of autonomous driving tasks properly … (¶ 0035); FIG. 4 is a block diagram depicting an example central chiplet 402 of an SoC 400 that includes a shared memory 405 for implementing duplicated status and shadowing for multiple SoCs, according to examples described herein … (¶ 0055); FIGS. 5 and 6 are flow charts describing methods of implementing duplicated status and shadowing for multiple SoCs, according to examples described herein … FIG. 
6 is a flow chart describing a further method of implementing duplicated status and shadowing for a dual SoC arrangement, in accordance with examples described herein. In the below discussion of FIG. 6, reference may be made to the SoC 300, SoC 400, and SoC 460 as the primary or backup SoC in the dual SoC arrangement … It is contemplated that any multiple-SoC arrangement can be used to implement the duplicated status and shadowing techniques described herein … (¶ 0061-0066); Kim more expressly teaches the aspect of “write the state of the first SoC into at least part of a second memory of a second SoC” – A memory system including a first central processing unit, a first memory module connected to the first central processing unit by a first channel, a second memory module connected to the first central processing unit by a second channel, and a third memory module connected to the first central processing unit by a third channel may be provided. Each of the first memory module, the second memory module, and the third memory module may be configured to write the same data in a data area thereof and a mirroring data area thereof in response to an address in a mirroring mode (abstract); In an example embodiment, the memory system 10 may perform an on-die mirroring operation in each of the memory modules 12-1, 12-2, and 12-3 connected to the channels CH1, CH2, and CH3 in the on-die mirroring mode. In this case, the on-die mirroring operation may include a write operation of simultaneously writing the same data to a first area (interchangeably referred to as data area (DA) or first memory area) and a second area (interchangeably referred to as mirrored data area (MDA) or second memory area) in response to a single address, and a read operation for outputting data read from any one of a first area DA and a second area MDA in response to any one address (¶ 0046); FIG.
5 is a diagram illustrating a memory 100 according to an example embodiment of the present inventive concepts … For example, the processing circuitry more specifically may include, but is not limited to, a central processing unit (CPU), an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a System-on-Chip (SoC), a programmable logic unit, a microprocessor, application-specific integrated circuit (ASIC), etc (¶ 0053); In an example embodiment, the application processor 3100 may be implemented as a system-on-chip (SoC). A kernel of an operating system running in the system-on-chip (SoC) may include an input/output (I/O) scheduler, and a device driver controlling the storage device 3300. The device driver may control access performance of the storage device 3300 with reference to the number of sync queues managed by the input/output scheduler, or may control a CPU mode, a DVFS level, or the like in the SoC (System-on-Chip) (¶ 0146)]; and storing the state for reading [A computing system can include a first system on chip (SoC) and a second SoC. Each SoC can comprise a memory in which the SoC publishes state information. For the first SoC, the state information can correspond to a set of tasks being performed by the first SoC, where the first SoC utilizes a plurality of computational components to perform the set of tasks. The first SoC can directly access the memory of the first SoC to dynamically read the state information published by the first SoC … (abstract); For example, if the first SoC 210 is the primary SoC and the second SoC 220 is the backup SoC, then the first SoC 210 performs a set of autonomous driving tasks and publishes state information corresponding to these tasks in the first memory 215. 
The second SoC 220 reads the published state information in the first memory 215 to continuously check that the first SoC 210 is operating within nominal thresholds (e.g., temperature thresholds, bandwidth and/or memory thresholds, etc.), and that the first SoC 210 is performing the set of autonomous driving tasks properly … (¶ 0035)], wherein the second SoC is configured to read the state from the shared memory [… In a backup role, the second SoC maintains a subset of its computational components in a low power state. When the second SoC detects a trigger while reading the state information published in the first memory of the first SoC … (abstract); For example, if the first SoC 210 is the primary SoC and the second SoC 220 is the backup SoC, then the first SoC 210 performs a set of autonomous driving tasks and publishes state information corresponding to these tasks in the first memory 215. The second SoC 220 reads the published state information in the first memory 215 to continuously check that the first SoC 210 is operating within nominal thresholds (e.g., temperature thresholds, bandwidth and/or memory thresholds, etc.), and that the first SoC 210 is performing the set of autonomous driving tasks properly … (¶ 0035)], wherein the shared memory is constructed at least from the at least part of the second memory of the second SoC [as shown in figure 2, where the corresponding “shared memory” comprises the first memory (215) of the first SoC (210), and the second memory (225) of the second SoC (220); FIG. 2 is a block diagram depicting an example computing system 200 implementing a multiple system-on-chip (MSoC), in accordance with examples described herein. 
In various examples, the computing system 200 can include a first SoC 210 having a first memory 215 and a second SoC 220 having a second memory 225 coupled by an interconnect 240 (e.g., an ASIL-D rated interconnect) that enables each of the first SoC 210 and second SoC 220 to read each other's memories 215, 225. During any given session, the first SoC 210 and the second SoC 220 may alternate roles, between a primary SoC and a backup SoC … For example, if the first SoC 210 is the primary SoC and the second SoC 220 is the backup SoC, then the first SoC 210 performs a set of autonomous driving tasks and publishes state information corresponding to these tasks in the first memory 215. The second SoC 220 reads the published state information in the first memory 215 to continuously check that the first SoC 210 is operating within nominal thresholds (e.g., temperature thresholds, bandwidth and/or memory thresholds, etc.), and that the first SoC 210 is performing the set of autonomous driving tasks properly … In various implementations, the second SoC 220 can publish state information corresponding to its computational components being maintained in a standby state (e.g., a low power state in which the second SoC 220 maintains readiness to take over the set of tasks from the first SoC 210). In such examples, the first SoC 210 can monitor the state information of the second SoC 220 by continuously or periodically reading the memory 225 of the second SoC 220 to also perform health check monitoring and error management on the second SoC 220 … (¶ 0034-0036); Kim also teaches this limitation -- A memory system including a first central processing unit, a first memory module connected to the first central processing unit by a first channel, a second memory module connected to the first central processing unit by a second channel, and a third memory module connected to the first central processing unit by a third channel may be provided. 
Each of the first memory module, the second memory module, and the third memory module may be configured to write the same data in a data area thereof and a mirroring data area thereof in response to an address in a mirroring mode (abstract)].

Regarding claim 1, Piednoel teaches implementing duplicated status and shadowing for multiple SoCs using a shared memory [FIG. 4 is a block diagram depicting an example central chiplet 402 of an SoC 400 that includes a shared memory 405 for implementing duplicated status and shadowing for multiple SoCs, according to examples described herein … (¶ 0055); FIGS. 5 and 6 are flow charts describing methods of implementing duplicated status and shadowing for multiple SoCs, according to examples described herein … FIG. 6 is a flow chart describing a further method of implementing duplicated status and shadowing for a dual SoC arrangement, in accordance with examples described herein. In the below discussion of FIG. 6, reference may be made to the SoC 300, SoC 400, and SoC 460 as the primary or backup SoC in the dual SoC arrangement … It is contemplated that any multiple-SoC arrangement can be used to implement the duplicated status and shadowing techniques described herein … (¶ 0061-0066)], but does not expressly teach that the duplication is performed by writing the same data/state into both the first memory and the second memory concurrently. However, Kim specifically teaches writing the same data into both the first and the second memory concurrently in a SoC system [A memory system including a first central processing unit, a first memory module connected to the first central processing unit by a first channel, a second memory module connected to the first central processing unit by a second channel, and a third memory module connected to the first central processing unit by a third channel may be provided.
Each of the first memory module, the second memory module, and the third memory module may be configured to write the same data in a data area thereof and a mirroring data area thereof in response to an address in a mirroring mode (abstract); In an example embodiment, the memory system 10 may perform an on-die mirroring operation in each of the memory modules 12-1, 12-2, and 12-3 connected to the channels CH1, CH2, and CH3 in the on-die mirroring mode. In this case, the on-die mirroring operation may include a write operation of simultaneously writing the same data to a first area (interchangeably referred to as data area (DA) or first memory area) and a second area (interchangeably referred to as mirrored data area (MDA) or second memory area) in response to a single address, and a read operation for outputting data read from any one of a first area DA and a second area MDA in response to any one address (¶ 0046); FIG. 5 is a diagram illustrating a memory 100 according to an example embodiment of the present inventive concepts … For example, the processing circuitry more specifically may include, but is not limited to, a central processing unit (CPU), an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a System-on-Chip (SoC), a programmable logic unit, a microprocessor, application-specific integrated circuit (ASIC), etc (¶ 0053); In an example embodiment, the application processor 3100 may be implemented as a system-on-chip (SoC). A kernel of an operating system running in the system-on-chip (SoC) may include an input/output (I/O) scheduler, and a device driver controlling the storage device 3300. The device driver may control access performance of the storage device 3300 with reference to the number of sync queues managed by the input/output scheduler, or may control a CPU mode, a DVFS level, or the like in the SoC (System-on-Chip) (¶ 0146)].
Therefore, it would have been obvious to one of ordinary skill in the art prior to Applicant’s invention to write the same data into both the first and the second memory concurrently in a SoC system, as specifically demonstrated by Kim, and to incorporate it into the existing scheme disclosed by Piednoel, because Kim teaches data mirroring ensures that data will still be available even if one memory module fails, which improves the reliability of the system [FIG. 8A is a diagram illustrating a read failure process in an on-die mirroring mode according to an example embodiment of the present inventive concepts, and FIG. 8B is a diagram illustrating a read retry process in an on-die mirroring mode according to an example embodiment of the present inventive concepts (¶ 0018)].

As to claim 2, Piednoel in view of Kim teaches The live migration method of claim 1, wherein the first SoC and the second SoC are disposed in different servers, chassis, or racks [Piednoel -- as shown in figure 4, where the first SoC (400) and the second SoC (460) are located at different places].

As to claim 3, Piednoel in view of Kim teaches The live migration method of claim 1, wherein the first SoC or the second SoC does not use a network to copy or transmit the state [Piednoel -- as shown in figure 2, where the first SoC (210) and the second SoC (220) communicate via internal link without using a network; For example, if the first SoC 210 is the primary SoC and the second SoC 220 is the backup SoC, then the first SoC 210 performs a set of autonomous driving tasks and publishes state information corresponding to these tasks in the first memory 215.
The second SoC 220 reads the published state information in the first memory 215 to continuously check that the first SoC 210 is operating within nominal thresholds (e.g., temperature thresholds, bandwidth and/or memory thresholds, etc.), and that the first SoC 210 is performing the set of autonomous driving tasks properly … (¶ 0035)], and the shared memory is constructed at least from the at least part of the second memory and at least from at least part of a first memory of the first SoC, or at least part of a third memory of the system [Piednoel – shared memory, figure 4, 405].

As to claim 4, Piednoel in view of Kim teaches The live migration method of claim 1, wherein no virtual machine is installed within the first SoC or the second SoC, and wherein neither virtual machine provisioning nor virtual machine resource allocation is required on the second SoC before the state is resumed on the second SoC [Piednoel -- as shown in figure 2, where the first SoC (210) and the second SoC (220) communicate via internal link without using a virtual machine; For example, if the first SoC 210 is the primary SoC and the second SoC 220 is the backup SoC, then the first SoC 210 performs a set of autonomous driving tasks and publishes state information corresponding to these tasks in the first memory 215. The second SoC 220 reads the published state information in the first memory 215 to continuously check that the first SoC 210 is operating within nominal thresholds (e.g., temperature thresholds, bandwidth and/or memory thresholds, etc.), and that the first SoC 210 is performing the set of autonomous driving tasks properly … (¶ 0035)].
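As an editorial illustration of the mechanism the Examiner maps to claim 1 (the first SoC writing its state into at least part of the second SoC's memory, which the second SoC then reads without a network copy), a minimal Python sketch follows. The class and field names are hypothetical and appear in neither reference:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: models the claim-1 mapping in which a first SoC
# writes its state into part of the second SoC's memory (the "shared memory")
# and the second SoC reads that state directly, with no copy over a network.
# All names here are hypothetical, not drawn from Piednoel or Kim.

@dataclass
class SoC:
    name: str
    memory: dict = field(default_factory=dict)  # this SoC's local memory

    def publish_state(self, peer: "SoC", state: dict) -> None:
        # Write state into at least part of the peer's memory over the
        # interconnect; the peer later reads it in place, no copy step.
        peer.memory["peer_state"] = state

    def read_peer_state(self) -> dict:
        # Read the state the peer published into our own memory region.
        return self.memory.get("peer_state", {})

primary = SoC("SoC-1")
backup = SoC("SoC-2")
primary.publish_state(backup, {"task": "autonomous_driving", "temp_ok": True})
resumed = backup.read_peer_state()
```

The sketch keeps the two memories as ordinary dictionaries purely to show the data flow; it does not model the interconnect or Kim's mirroring areas.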
As to claim 5, Piednoel in view of Kim teaches The live migration method of claim 1, wherein the second SoC is configured to read the state from the shared memory to resume the state on the second SoC, and the second SoC is configured to operate according to the state [Piednoel -- as shown in figure 5, steps 500-525; … In a backup role, the second SoC maintains a subset of its computational components in a low power state. When the second SoC detects a trigger while reading the state information published in the first memory of the first SoC, the second SoC powers the subset of computational components to take over the set of tasks (abstract)].

As to claim 6, Piednoel in view of Kim teaches The live migration method of claim 1, wherein the first SoC is configured to utilize at least one central processing unit core of the first SoC to simulate an arithmetic logic unit of a graphics processing unit (GPU), and the first SoC does not include any GPU [Piednoel -- In various examples, the system on chip 300 can further include a machine-learning (ML) accelerator chiplet 340 that is specialized for accelerating AI workloads, such as image inferences or other sensor inferences using machine learning, in order to achieve high performance and low power consumption for these workloads. The ML accelerator chiplet 340 can include an engine designed to efficiently process graph-based data structures, which are commonly used in AI workloads, and a highly parallel processor, allowing for efficient processing of large volumes of data … (¶ 0050)].

As to claim 8, Piednoel in view of Kim teaches The live migration method of claim 1, wherein a computing pool of the system includes a plurality of processors of a plurality of SoCs, and the plurality of SoCs include the first SoC or the second SoC [Piednoel -- as shown in figures 2 and 4, where there are at least two SoCs].
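The claim 5 takeover path (the backup reading the published state and resuming the tasks when a trigger is detected) can be sketched as a simple health check. The threshold value and dictionary keys below are illustrative assumptions, not taken from Piednoel:

```python
# Minimal sketch of the claim-5 failover decision: the backup SoC reads the
# state the primary publishes into shared memory and takes over the task set
# when a trigger (here, a temperature threshold breach) is detected.
# The limit and the state-dictionary keys are illustrative assumptions only.

TEMP_LIMIT_C = 90.0  # hypothetical nominal threshold

def check_and_take_over(published_state: dict) -> str:
    """Return which SoC should run the tasks after this health check."""
    if published_state.get("temp_c", 0.0) > TEMP_LIMIT_C:
        # Trigger detected: the backup powers up its standby components
        # and resumes operation according to the published state.
        return "backup"
    return "primary"

assert check_and_take_over({"temp_c": 45.0, "task": "drive"}) == "primary"
assert check_and_take_over({"temp_c": 95.5, "task": "drive"}) == "backup"
```

In Piednoel's arrangement this check runs continuously or periodically; the single-call form here only shows the decision itself.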
As to claim 11, it recites substantially the same limitations as in claim 1, and is rejected for the same reasons set forth in the analysis of claim 1. Refer to “As to claim 1” presented earlier in this Office Action for details.

As to claim 13, it recites substantially the same limitations as in claim 3, and is rejected for the same reasons set forth in the analysis of claim 3. Refer to “As to claim 3” presented earlier in this Office Action for details.

As to claim 14, it recites substantially the same limitations as in claim 4, and is rejected for the same reasons set forth in the analysis of claim 4. Refer to “As to claim 4” presented earlier in this Office Action for details.

As to claim 15, it recites substantially the same limitations as in claim 5, and is rejected for the same reasons set forth in the analysis of claim 5. Refer to “As to claim 5” presented earlier in this Office Action for details.

As to claim 16, it recites substantially the same limitations as in claim 6, and is rejected for the same reasons set forth in the analysis of claim 6. Refer to “As to claim 6” presented earlier in this Office Action for details.

As to claim 18, it recites substantially the same limitations as in claim 8, and is rejected for the same reasons set forth in the analysis of claim 8. Refer to “As to claim 8” presented earlier in this Office Action for details.

5. Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Piednoel in view of Kim, and further in view of Hira et al. (US Patent Application Publication 2016/0306678, hereinafter Hira).

Regarding claim 9, Piednoel in view of Kim does not teach increasing or decreasing the number of SoCs based on the system performance, as recited in the claim.
However, Hira teaches wherein the system is configured to predict at least one performance metrics of each of the plurality of SoCs in the computing pool according to a continuous time structural equation model, and is configured to determine whether the at least one performance metrics is lower than or higher than at least one performance metrics threshold so as to determine whether to increase or decrease a number of the plurality of SoCs in the computing pool [Having powered-up the one or more auxiliary SOCs, the auxiliary SOCs may also be configured with an appropriate system/application image for performing the processing of the workload, if not already configured to do so, and the workload is then offloaded from the primary SOC and distributed to the auxiliary SOCs, such as via a Peripheral Component Interconnect Express (PCIE) bus and interface on each of the SOCs, or the other communications pathway between the SOCs … The analytics monitor continues to monitor the bus traffic of the platform, e.g., the pins of the powered-up SOCs and the signals being transmitted across the bus from these pins, to identify conditions where there is an underloading of the platform, e.g., the workload is less than one or more predetermined thresholds … Thus, through the mechanisms of the illustrative embodiments, a sub-cloud is provided within the platform which allows dynamic allocation/de-allocation of resources to application specific workloads … In some illustrative embodiments, as described herein, the resources are general purpose SOCs that are configured dynamically for performing different types of execution on different types of workloads or which have internal cores of various types that are already configured to execute certain types of workloads, e.g., a cryptographic core, a graphics processing core, or the like (¶ 0029-0031); As discussed above, each of the SOCs 422-428 and 440 comprises an internal performance monitor that monitors events occurring within the 
logic of the SOCs 422-428 and 440 and potentially communicates this information to the analytics monitor 450 via the interconnect bus 430 … Based on the loading condition of the SOC, the analytics monitor 450 may perform operations to increase/decrease the number of auxiliary SOCs powered-up in the SOC pool 420 to which the workload is distributed and/or route the workload back to the primary SOC 440 or a subset of the SOCs 422-428, 440 less than a previously powered-up number of SOCs, e.g., going back from 3 to 2 to 1 SOCs powered-up as needed (¶ 0100); The metrics measured by the cloud system may take many different forms depending upon the particular implementation. Advanced cloud platforms will provide controls for auto-scaling and bursting with application response time being the most common metric. An entity that deploys the workload may set a desired response time of between 50-300 ms (for example). When the average response time begins to drift beyond the upper limit, the cloud system may trigger operations to begin taking steps to scale the workload. In accordance with the illustrative embodiments, the scaling can be done using the pool of SOCs. Of course other metrics may include measuring memory actively used by the application, looking at disk space usage, and the like. Essentially any measurable system parameter may be set as the threshold to scale the cloud computing system (¶ 0121)], wherein the plurality of SoCs comprises a fourth SoC, wherein the fourth SoC is removed from the computing pool and then assigned to another computing pool [Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. 
There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter) (¶ 0048); In response to the command/signaling from the analytics monitor 450, the PRC hardware block 460 then controls the power, reset, and clocking of the SOCs 422-428 in the pool of SOCs 420 to thereby power-up/power-down a corresponding number of the SOCs 422-428 to offload the processing of the workload to the powered-up SOCs 422-428 … (¶ 0091); In response to the analytics monitor 450 identifying a predetermined condition indicative of an underloaded state of the primary SOC 440, the analytics monitor 450 sends a command/signal to the PRC hardware 460 informing the PRC hardware 460 of the need to power-down one or more of the SOCs 422-428 … (¶ 0098); FIG. 10A-10D illustrate example scenarios of the dynamic powering-up and powering-down of SOCs in a pool of SOCs to facilitate workload distribution in accordance with example illustrative embodiments … (¶ 0119); Thus, with the implementation of the mechanisms of the illustrative embodiments, a pool of general purpose resources, such as general purpose SOCs, may be provided in a low-power consumption state, which may then be dynamically allocated to execution of cloud computing system workloads in response to a determination that one or more of the computing devices in the cloud computing system have become overloaded … (¶ 0136)] Therefore, it would have been obvious for one of ordinary skills in the art prior to Applicant’s invention to increase or decrease the number of SoCs based on the system performance, as specifically demonstrated by Hira, and to incorporate it into the existing scheme disclosed by Piednoel in view of Kim, because doing so allows the number of SoC be dynamically adjusted based on demand, which allows all the resources be optimally 
utilized. As to claim 19, it recites substantially the same limitations as in claim 9, and is rejected for the same reasons set forth in the analysis of claim 9. Refer to “As to claim 9” presented earlier in this Office Action for details. 6. Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Piednoel in view of Kim, and further in view of Datla et al. (US Patent Application Publication 2022/0067536, hereinafter Datla). Regarding claim 7, Piednoel in view of Kim does not teach locking or unlocking the shared memory to prevent a read operation and a write operation from occurring simultaneously. However, Datla specifically teaches preventing a read operation and a write operation from occurring simultaneously on a shared memory [… The memory management subsystem can leverage this increased downtime to reduce the power consumption of the shared memory unit 140 via: a shared memory unit 140 that is partitioned into discrete memory modules; a conflict resolution scheduler configured to analyze a queue of data transfer requests to the shared memory unit 140, to detect collisions in this shared memory unit queue, and to reorder or pause requests in order to resolve these collisions; and a power management unit configured to track an idle factor of each memory module and selectively switch memory modules into sleep mode based on the idle factor of each memory module (¶ 0016); Generally, the memory management subsystem includes a set of hardware components at the interface between the DMA core 110 and the shared memory unit 140. 
More specifically, the memory management subsystem, includes a shared memory unit queue configured to store a second set of data transfer requests associated with the shared memory unit 140, a conflict resolution scheduler, and a power management unit configured to, for each memory module in the set of memory modules, switch the memory module to sleep mode in response to detecting an idle factor of the memory module greater than a threshold idle factor. Thus, the memory management subsystem increases data transfer bandwidth between the shared memory unit 140 and the DMA core 110 and therefore increases the rate of data transfer between the shared memory unit 140 and the set of primary memory units 120 (e.g., via the DMA core 110) without increasing the power consumption of the shared memory unit 140. Additionally, the memory management subsystem also prevents read/write collisions (e.g., due to simultaneous reads or writes to the same memory module) despite the increase in read/write requests handled by the shared memory unit 140 (¶ 0050)]. Therefore, it would have been obvious for one of ordinary skill in the art prior to Applicant’s invention to prevent a read operation and a write operation from occurring simultaneously on a shared memory, as specifically demonstrated by Datla, and to incorporate it into the existing scheme disclosed by Piednoel in view of Kim, because Datla teaches that doing so allows more efficient usage of the shared memory [As a result of the increase in data transfer bandwidth between the shared memory unit 140 and the primary memory units 120 enabled by the broadcast subsystem 130, the processor system 100 can issue fewer read/write requests to the shared memory unit 140 per unit of memory transferred, thereby resulting in greater downtime for the memory modules of the shared memory unit 140. 
The memory management subsystem can leverage this increased downtime to reduce the power consumption of the shared memory unit 140 via: a shared memory unit 140 that is partitioned into discrete memory modules; a conflict resolution scheduler configured to analyze a queue of data transfer requests to the shared memory unit 140, to detect collisions in this shared memory unit queue, and to reorder or pause requests in order to resolve these collisions; and a power management unit configured to track an idle factor of each memory module and selectively switch memory modules into sleep mode based on the idle factor of each memory module. In one implementation, the conflict resolution scheduler and the power management unit are hardware-implemented finite state machines or microprocessors imbedded within the processor system 100 (¶ 0016)]. As to claim 17, it recites substantially the same limitations as in claim 7, and is rejected for the same reasons set forth in the analysis of claim 7. Refer to “As to claim 7” presented earlier in this Office Action for details. Allowable Subject Matter 7. Claims 10 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Conclusion 8. Claims 1-9, and 11-19 are rejected as explained above. Claims 10 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. 9. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHENG JEN TSAI whose telephone number is 571-272-4244. The examiner can normally be reached on Monday-Friday, 9-6. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kenneth Lo can be reached on 571-272-9774. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). /SHENG JEN TSAI/Primary Examiner, Art Unit 2136 November 27, 2025
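The Hira passages the examiner cites (¶¶ 0100, 0121) describe threshold-driven scaling: an analytics monitor compares a measured metric, such as average response time against a 50-300 ms window, and powers auxiliary SOCs in the pool up or down accordingly. A minimal sketch of that control loop; all class and function names here are hypothetical, not drawn from Hira:

```python
# Sketch of Hira's threshold-driven SOC-pool scaling (¶¶ 0100, 0121).
# Names (SocPool, scale) are illustrative, not from the reference.

class SocPool:
    def __init__(self, total_socs, powered_up=1):
        self.total_socs = total_socs
        self.powered_up = powered_up  # the primary SOC is always powered

    def power_up_one(self):
        if self.powered_up < self.total_socs:
            self.powered_up += 1      # power up an auxiliary SOC

    def power_down_one(self):
        if self.powered_up > 1:       # never power down the primary SOC
            self.powered_up -= 1

def scale(pool, avg_response_ms, low_ms=50, high_ms=300):
    """Scale when average response time drifts outside [low_ms, high_ms]."""
    if avg_response_ms > high_ms:
        pool.power_up_one()    # overloaded: distribute workload to another SOC
    elif avg_response_ms < low_ms:
        pool.power_down_one()  # underloaded: route workload back, save power

pool = SocPool(total_socs=4)
scale(pool, avg_response_ms=420)  # beyond the upper limit: power up
scale(pool, avg_response_ms=30)   # below the lower limit: power down
```

As ¶ 0121 notes, response time is only the most common trigger; memory or disk usage, or essentially any measurable parameter, could drive the same loop.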
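Claim 7's limitation, locking and unlocking the shared memory so a read and a write cannot occur simultaneously, amounts to mutual exclusion around the shared region. A minimal sketch assuming a single lock per shared memory; note this is a simplification for illustration, since Datla itself resolves collisions by reordering queued requests to a partitioned memory rather than by one global lock, and all names below are hypothetical:

```python
# Sketch of claim 7's lock/unlock of a shared memory: a mutex around the
# region so a read and a write cannot occur simultaneously.
import threading

class SharedMemory:
    def __init__(self, size):
        self._data = bytearray(size)
        self._lock = threading.Lock()  # "locking/unlocking the shared memory"

    def write(self, offset, payload):
        with self._lock:               # a writer excludes concurrent readers
            self._data[offset:offset + len(payload)] = payload

    def read(self, offset, length):
        with self._lock:               # a reader excludes concurrent writers
            return bytes(self._data[offset:offset + length])

mem = SharedMemory(16)
mem.write(0, b"abcd")
assert mem.read(0, 4) == b"abcd"
```

A single lock serializes all access; Datla's partitioned design avoids that bottleneck by scheduling around per-module conflicts instead, which is the distinction the rejection glosses over.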
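Datla's cited conflict resolution scheduler (¶ 0016) analyzes a queue of data transfer requests, detects collisions in that queue, and reorders or pauses requests to resolve them. A rough sketch of the reordering behavior, under the assumption that a "collision" is two back-to-back requests targeting the same memory module; the `(module, op)` request representation is hypothetical:

```python
# Rough sketch of Datla's conflict resolution scheduler (¶ 0016):
# reorder/pause queued requests so consecutive requests never collide
# on the same partitioned memory module.
from collections import deque

def schedule(requests):
    """Reorder (module, op) requests so no two consecutive requests target
    the same module; colliding requests are paused and retried later."""
    queue, paused = deque(requests), deque()
    ordered, last_module = [], None
    while queue or paused:
        if paused and paused[0][0] != last_module:
            req = paused.popleft()      # conflicting module is free again
        elif queue:
            req = queue.popleft()
            if req[0] == last_module:   # collision detected: pause request
                paused.append(req)
                continue
        else:
            last_module = None          # only paused collisions remain:
            continue                    # let one idle slot pass first
        ordered.append(req)
        last_module = req[0]
    return ordered

print(schedule([(0, "w"), (0, "r"), (1, "w")]))
```

The same queue structure is where Datla's power management unit would observe per-module idle factors to decide which modules to put to sleep.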

Prosecution Timeline

Aug 02, 2023
Application Filed
Dec 30, 2024
Non-Final Rejection — §103
Mar 10, 2025
Response Filed
Mar 30, 2025
Final Rejection — §103
Jun 16, 2025
Request for Continued Examination
Jun 20, 2025
Response after Non-Final Action
Aug 27, 2025
Non-Final Rejection — §103
Nov 05, 2025
Response Filed
Nov 28, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596490
MEMORY MANAGEMENT USING A REGISTER
2y 5m to grant Granted Apr 07, 2026
Patent 12585387
Clock Domain Phase Adjustment for Memory Operations
2y 5m to grant Granted Mar 24, 2026
Patent 12579075
USING RETIRED PAGES HISTORY FOR INSTRUCTION TRANSLATION LOOKASIDE BUFFER (TLB) PREFETCHING IN PROCESSOR-BASED DEVICES
2y 5m to grant Granted Mar 17, 2026
Patent 12572474
SPARSITY COMPRESSION FOR INCREASED CACHE CAPACITY
2y 5m to grant Granted Mar 10, 2026
Patent 12561070
AUTONOMOUS BATTERY RECHARGE CONTROLLER
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
70%
Grant Probability
83%
With Interview (+13.0%)
3y 6m
Median Time to Grant
High
PTA Risk
Based on 790 resolved cases by this examiner. Grant probability derived from career allow rate.
