Prosecution Insights
Last updated: April 19, 2026
Application No. 18/510,937

METHOD FOR PROVIDING A SECONDARY BACKUP APPLICATION AS A BACKUP FOR A PRIMARY APPLICATION

Status: Non-Final OA (§102)
Filed: Nov 16, 2023
Examiner: NGUYEN, KIM T
Art Unit: 2153
Tech Center: 2100 — Computer Architecture & Software
Assignee: Carnegie Mellon University
OA Round: 3 (Non-Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 87% (1607 granted / 1844 resolved), +32.1% vs TC avg — above average
Interview Lift: +8.4% among resolved cases with interview (moderate lift)
Typical Timeline: 2y 8m average prosecution; 13 applications currently pending
Career History: 1857 total applications across all art units

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 22.7% (-17.3% vs TC avg)
§102: 36.5% (-3.5% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)
Based on career data from 1844 resolved cases; Tech Center averages are estimates.

Office Action

§102
DETAILED ACTION

Response to Arguments

Applicant’s arguments filed on 09/18/2025 have been fully considered but they are not persuasive for the following reasons:

Applicant argues that Block does not disclose “wherein the predictive standby manager processes both the received application-specific state data and the received platform-specific state data in order to initiate the backup process”. However, Block discloses (figs. 2-5, 9, [0009]-[0010], [0031]-[0032]):

[0009] Various aspects of the subject innovation facilitate migration for state of an application, from a primary machine to a backup machine in platform virtualization systems. In one aspect, such migration can occur via a hybrid approach--wherein both a virtual machine monitor (e.g., a hypervisor), and an application itself (e.g., an application running inside a virtual machine), determine states that are to migrate from the primary machine to the backup machine. The application can determine information about states that are required to be migrated, and further communicates such information to the hypervisor (e.g., a direct communication occurring between the application and the hypervisor--without assistance of the local operating system of the platform virtualization system). Based on such information received, the hypervisor arranges for migration of the required states over to the backup virtual machine. For example, the application can indicate transaction boundaries for each migration to the hypervisor, and maintain consistency of the state to be transferred. Hence, overhead (e.g., bandwidth, computer cycles) and associated delays can be mitigated, as there no longer exists a requirement to track the state of the virtual machine as a whole during migration processes. In a related aspect, a shared-memory based signaling mechanism can be employed to communicate critical memory areas and transaction boundaries to the hypervisor (e.g., by employing a shared page between the application and the hypervisor).
Memory areas that are to be saved across failures are initially determined, and subsequently communicated to the hypervisor through the shared memory and based on the signaling mechanism. For example, the hypervisor can employ a shadowed page table mechanism to track any "writes" in such memory areas, and migrate them to the backup machine. After the application signals a transaction boundary, the hypervisor can commit the updates that were sent to the backup via replacing the original memory of the backup with the updated memory. Moreover, during such committing, the hypervisor can freeze the application and/or virtual machine.

In addition, Block discloses (FIG. 2) a further exemplary aspect of a state migration component 210 that employs a shared memory 215, in accordance with an aspect of the subject innovation. In one aspect, a shared-memory based signaling mechanism can be employed to communicate critical memory areas and transaction boundaries to the hypervisor 221 (e.g., by employing a shared page between the application and the hypervisor). Memory spaces 230 that are to be saved across failures are initially determined, and subsequently communicated to the hypervisor 221 through the shared memory 215 and based on the signaling mechanism. In general, the system memory can typically be divided into individually addressable units, commonly referred to as "pages," each of which can contain many separately addressable data words, which typically can include several bytes. Such pages can further be identified by addresses commonly referred to as "page numbers." The application(s) 225 can desire a state thereof to be moved from a first machine (e.g., an active machine that sends the data) to a second machine (e.g., a backup machine or a passive machine that receives such data), wherein the state migration component 210 can exist as part of both machines.
Items that are to be copied on the secondary machine can be identified and an associated period for such transfer designated. Subsequently, a negotiation can occur between the first machine and the second machine. The application on machine 1 can directly communicate with the hypervisor on machine 1 (e.g., without employing an associated operating system) and indicate a state that is required to be transferred to machine 2. As such, the hypervisor 221 enables mapping the address space from the application onto the backup machine. For example, the hypervisor 221 can mark up memory areas of interest--wherein, as the application 225 is processing the data, the hypervisor 221 is marking areas that have been changed and marked as "dirty" (e.g., changed). Subsequently, the hypervisor 221 and the application 225 can initiate migrating of data to the backup server side. The application 225 can signal end of transaction, wherein, as part thereof, the hypervisor 221 ensures that the application 225 is frozen and memory areas can be copied to the backup server. According to a further aspect, the dirty pages can be communicated to the application, which further implements the memory copying. Such aspects of the subject innovation can increase efficiencies associated with various operations (e.g., computationally).

In response, Examiner notes that the Examiner is entitled to give claim limitations their broadest reasonable interpretation in light of the specification. See MPEP 2111 [R-1], Interpretation of Claims--Broadest Reasonable Interpretation: during patent examination, the pending claims must be "given the broadest reasonable interpretation consistent with the specification." Applicant always has the opportunity to amend the claims during prosecution, and broad interpretation by the examiner reduces the possibility that the claim, once issued, will be interpreted more broadly than is justified. In re Prater, 162 USPQ 541, 550-51 (CCPA 1969).

Claim Rejections - 35 USC § 102

3. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless--

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

4. Claims 1-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Frederick P. Block (US-20120054409-A1).

As per claim 1, Block teaches “a method for providing a secondary backup application as a backup for a primary application, for a predictive standby in a distributed system, the method comprising the following steps carried out by a predictive standby manager”: “receiving application-specific state data, the application-specific state data being obtained from monitoring a state of the primary application,” ([0009], [0024]); “receiving platform-specific state data, the platform-specific state data being obtained from monitoring a state of at least one platform that executes the primary application,” (figs. 2, 5, 7-9, [0009]-[0010], [0024]-[0025]); and “initiating a backup process for using the secondary backup application as the backup for the primary application based on the received application-specific state data and the received platform-specific state data, wherein the predictive standby manager processes both the received application-specific state data and the received platform-specific state data in order to initiate the backup process,” ([0009], [0024]).

As per claim 2, Block further shows “wherein the platform-specific state data results from a monitoring of dynamic properties of a digital communication and/or computation infrastructure of the at least one platform to consider an influence of the dynamic properties on a functioning of the primary application for the initiation of the backup process,” ([0009]-[0010]).
As per claim 3, Block further shows “wherein the backup process includes at least one of the following: running the secondary backup application, deciding on deploying the secondary backup application, activating the secondary backup application, suspending the secondary backup application, triggering updates of data from the primary application to keep the secondary backup application,” ([0032]).

As per claim 4, Block further shows “wherein the backup process includes deploying the secondary backup application on the same hardware platform that executes the primary application,” ([0024]).

As per claim 5, Block further shows “wherein the backup process includes deploying the secondary backup application on at least one different platform than the at least one platform that executes the primary application, the at least one different platform and the at least one platform being part of the distributed system, and a communication infrastructure of the at least one different platform being automatically reconfigured so that the secondary backup application: (i) takes over an operation of the primary application, and/or (ii) receives data required for taking over the operation of the primary application, and/or (iii) uses connections to sensors and/or actuators and/or input and/or output interfaces previously used for the primary application,” ([0032]).

As per claim 6, Block further shows “wherein the step of initiating the backup process further includes the following steps”: “detecting a critical state including a failure and/or a redundancy requiring safety- and/or time-critical mode, of the primary application, based on an evaluation of the received application-specific state data, and activating the secondary backup application when the critical state is detected,” ([0043]).
As per claim 7, Block further shows “wherein the step of initiating the backup process further includes the following steps”: “predicting a critical state including a failure and/or a transient software failure and/or a runtime error and/or a time until a potential crash, of the primary application, based on an application of machine learning using the received application-specific state data and/or based on monitoring compute operations and/or memory transactions,” ([0044]); and “activating the secondary backup application based on the prediction,” ([0044]).

As per claim 8, Block further shows “wherein the step of initiating the backup process further includes the following steps”: “predicting a critical state including a hardware failure of the platform, based on an evaluation of the received platform-specific state data,” ([0031]); and “activating the secondary backup application based on the prediction,” ([0031]).

As per claim 9, Block teaches “a non-transitory computer-readable medium on which is stored a computer program including instructions for providing a secondary backup application as a backup for a primary application, for a predictive standby in a distributed system, the instructions, when executed by a computer, causing the computer to perform the following steps using a predictive standby manager”: “receiving application-specific state data, the application-specific state data being obtained from monitoring a state of the primary application,” ([0009], [0024]); “receiving platform-specific state data, the platform-specific state data being obtained from monitoring a state of at least one platform that executes the primary application,” (figs. 2, 5, 7-9, [0009]-[0010], [0024]-[0025]); and “initiating a backup process for using the secondary backup application as the backup for the primary application based on the received application-specific state data and the received platform-specific state data, wherein the predictive standby manager processes both the received application-specific state data and the received platform-specific state data in order to initiate the backup process,” ([0009], [0024]).

As per claim 10, Block teaches “a data processing apparatus configured to provide a secondary backup application as a backup for a primary application, for a predictive standby in a distributed system, the data processing apparatus configured to”: “receive application-specific state data, the application-specific state data being obtained from monitoring a state of the primary application,” ([0009], [0024]); “receive platform-specific state data, the platform-specific state data being obtained from monitoring a state of at least one platform that executes the primary application,” (figs. 2, 5, 7-9, [0009]-[0010], [0024]-[0025]); and “initiate a backup process for using the secondary backup application as the backup for the primary application based on the received application-specific state data and the received platform-specific state data, wherein the predictive standby manager processes both the received application-specific state data and the received platform-specific state data in order to initiate the backup process,” ([0009], [0024]).

As per claim 11, Block further shows “wherein the platform-specific state data includes at least one metric comprising a temperature of the at least one platform, a load of a CPU of the at least one platform, a utilization of the CPU, a load of a memory infrastructure of the at least one platform, or a utilization of the memory infrastructure,” (figs. 2-5, 9, [0031]-[0032]).

As per claim 12, Block further shows “wherein the platform-specific state data includes at least one metric comprising a temperature of the at least one platform, a load of a CPU of the at least one platform, a utilization of the CPU, a load of a memory infrastructure of the at least one platform, or a utilization of the memory infrastructure,” (figs. 2-5, 9, [0031]-[0032]).
As per claim 13, Block further shows “wherein the platform-specific state data includes at least one metric comprising a temperature of the at least one platform, a load of a CPU of the at least one platform, a utilization of the CPU, a load of a memory infrastructure of the at least one platform, or a utilization of the memory infrastructure,” (figs. 2-5, 9, [0031]-[0032]).

Allowable Subject Matter

5. Claims 1-13 would be allowable if rewritten or amended to overcome the rejections as set forth in this Office action.

Conclusion

6. The prior art made of record and listed on form PTO-892 provided to Applicant is considered relevant to the claimed invention. Applicant should review each identified reference carefully before responding to this Office action in order to properly advance the case in light of the prior art.

Contact Information

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIM T NGUYEN, whose telephone number is (571) 270-1757. The examiner can normally be reached Mon-Thurs, 6:00-4:30 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kavita Stanley, can be reached at (571) 272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Nov. 06, 2025
/KIM T NGUYEN/
Primary Examiner, Art Unit 2153
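The hybrid migration scheme the examiner quotes from Block (the application registers critical memory regions through a shared page, the hypervisor tracks writes to those regions, and dirty pages are committed to the backup at application-signaled transaction boundaries) can be sketched as a toy model. All class and method names below are invented for illustration; nothing here is taken from Block's actual disclosure.

```python
# Toy model of Block's hybrid state migration: shared-memory signaling,
# dirty tracking, and commit-on-transaction-boundary with a freeze.

class SharedPage:
    """Stands in for the shared page used to signal the hypervisor."""
    def __init__(self):
        self.registered_regions = set()  # memory areas to save across failures
        self.boundary_signaled = False

class Hypervisor:
    def __init__(self, shared_page, backup):
        self.shared = shared_page
        self.backup = backup             # region id -> committed bytes (backup machine)
        self.dirty = {}                  # region id -> pending bytes, analogue of
                                         # shadowed-page-table write tracking

    def track_write(self, region, data):
        # Only writes to registered ("critical") regions are tracked as dirty.
        if region in self.shared.registered_regions:
            self.dirty[region] = data

    def on_transaction_boundary(self, app):
        # Freeze the application, replace the backup's copy of each dirty
        # region with the updated memory (the "commit"), then unfreeze.
        app.frozen = True
        for region, data in self.dirty.items():
            self.backup[region] = data
        self.dirty.clear()
        app.frozen = False

class App:
    def __init__(self, shared_page, hypervisor):
        self.shared = shared_page
        self.hv = hypervisor
        self.frozen = False

    def register(self, region):
        # Communicate a critical memory area directly to the hypervisor via
        # the shared page, bypassing the guest OS.
        self.shared.registered_regions.add(region)

    def write(self, region, data):
        self.hv.track_write(region, data)

    def end_transaction(self):
        # Signal a transaction boundary; the hypervisor commits dirty pages.
        self.shared.boundary_signaled = True
        self.hv.on_transaction_boundary(self)
```

A round trip (register a region, write, signal the boundary) leaves the backup holding the updated memory and the dirty set empty, mirroring the quoted freeze-and-commit sequence.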
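As a reading aid for claim 1 as quoted in the rejection, a minimal sketch of a "predictive standby manager" that processes both application-specific and platform-specific state data before initiating the backup process might look like the following. The field names and thresholds are assumptions made up for this sketch; they come from neither the application nor Block.

```python
# Hypothetical sketch of claim 1's predictive standby manager: it receives
# application-specific and platform-specific state data and processes BOTH
# before deciding whether to initiate the backup process.

from dataclasses import dataclass

@dataclass
class AppState:            # from monitoring the primary application (assumed fields)
    error_rate: float
    heartbeat_ok: bool

@dataclass
class PlatformState:       # from monitoring the executing platform (assumed fields)
    cpu_load: float        # 0.0 - 1.0
    temperature_c: float

class PredictiveStandbyManager:
    def __init__(self, cpu_limit=0.95, temp_limit_c=85.0):
        self.cpu_limit = cpu_limit           # illustrative thresholds
        self.temp_limit_c = temp_limit_c
        self.backup_initiated = False

    def evaluate(self, app: AppState, plat: PlatformState) -> bool:
        # A critical state detected from the application-specific data...
        app_critical = (not app.heartbeat_ok) or app.error_rate > 0.1
        # ...or predicted from the platform-specific data.
        plat_critical = (plat.cpu_load > self.cpu_limit
                         or plat.temperature_c > self.temp_limit_c)
        # Both inputs are processed; either condition triggers the backup.
        if app_critical or plat_critical:
            self.backup_initiated = True     # would activate the secondary app
        return self.backup_initiated
```

Healthy readings leave the manager idle; an overloaded CPU or a failed heartbeat flips `backup_initiated`, standing in for activating the secondary backup application.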

Prosecution Timeline

Nov 16, 2023
Application Filed
Oct 15, 2024
Non-Final Rejection — §102
Jan 21, 2025
Response Filed
Apr 17, 2025
Final Rejection — §102
Sep 18, 2025
Request for Continued Examination
Sep 24, 2025
Interview Requested
Sep 29, 2025
Response after Non-Final Action
Nov 07, 2025
Non-Final Rejection — §102
Nov 12, 2025
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602388: GENERATIVE SEARCH ENGINE TEXT DOCUMENTS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12596735: SEMANTIC TEXT ANALYSIS FOR GLOSSARY MAINTENANCE
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12596688: Managed Directories for Virtual Machines
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12591579: Aggregation Operations In A Distributed Database
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586095: METHODS AND APPARATUS TO ANALYZE AND ADJUST DEMOGRAPHIC INFORMATION
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 96% (+8.4%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 1844 resolved cases by this examiner. Grant probability derived from career allow rate.
