Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The instant application, having Application No. 18/360,247 and filed on 7/27/2023, is presented for examination.
Examiner Notes
Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Priority
Acknowledgement is made of applicant’s claim for priority based on application IN202221043162 filed in REPUBLIC OF INDIA on 07/28/2022.
Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Drawings
The applicant’s drawings submitted are acceptable for examination purposes.
Authorization for Internet Communications
The examiner encourages Applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03):
“Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file.”
Please note that the above statement can only be submitted via Central Fax, regular postal mail, or EFS-Web.
Information Disclosure Statement
As required by M.P.E.P. 609, the applicant’s submission of the Information Disclosure Statement dated 1/22/2024 is acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 4-6, 10-14, and 16-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Li (US 2012/0110237).
As per claim 1, Li discloses a system for facilitating workload portability, the system comprising:
a target server instantiated at a target platform, wherein the target server is configured to:
store, at the target platform, a snapshot of a workload executing at a source platform, wherein the snapshot is captured at each defined time-interval and corresponds to an incremental change in the workload in raw state, (Paragraph 37 “Under an implementation, in the embodiment of the present invention, the source physical machine 100 has an OS running thereon, and the OS has at least one service 101 and a Live-P2V logic functional entity 102 running thereon, wherein, the Live-P2V logic functional entity 102 is further configured to initially synchronize disk snapshot data from the source physical machine 100 to the target virtual machine 201 at a first time point; monitor a disk I/O writing operation in the source physical machine since the first time point; incrementally synchronize updated disk data in the source physical machine 100 to the target virtual machine 201, and stop the monitoring when an increment value of the disk I/O writing operation in the source physical machine is less than a second threshold; or stop the monitoring when a sum of the increment value of the disk I/O writing operation in the source physical machine and the increment value of the updated memory page is less than a third threshold.) and wherein the workload stored at the target platform includes one or more boot files and one or more data files (Paragraph 88 “Specifically, according to the basic configuration information of the virtual machine in S501, reconfiguring the disk mirror file of the Xen virtual machine may include: updating Boot files, changing driving files, adding drivers of virtual hardware, and modifying device files including hda, hdb and cdrom as device files of the virtual machine.”);
update, based on a trigger pertaining to workload portability of the workload, the one or more boot files and the one or more data files with configuration supported by the target platform (Paragraph 88); and
execute the updated one or more boot files and the updated one or more data files at the target platform, wherein the execution of the workload at the target platform is identical to the execution thereof at the source platform (Paragraphs 12-13 “incrementally synchronizing data of the updated memory page in the source physical machine to the target virtual machine, and stopping the monitoring when an increment value of the updated memory page in the source physical machine is less than a first threshold; and calling the virtualization platform VMM Host to resume the target virtual machine to a running state.”).
As per claim 2, Li further discloses the system further comprising a source server instantiated at the source platform, wherein the source server is configured to capture the snapshot of the workload and communicate the captured snapshot to the target server at each defined time-interval (Paragraphs 9-18).
As per claim 4, Li further discloses wherein the target server is further configured to execute a pull operation to retrieve the snapshot of the workload from the source platform at each defined time-interval (Paragraph 49 “To be noted, herein the incrementally synchronizing may be carried out at a preset cycle, e.g., at an interval of 1 s from the second time point, and the cycle may be flexibly set according to the actual application scene. Herein the increment value may be a size of data of the updated memory page in the source physical machine monitored in the current cycle and to be synchronized.”).
As per claim 5, Li further discloses wherein the snapshot includes raw data associated with a boot file or a data file of the workload being executed on a first virtual machine hosted at the source platform (Paragraph 88).
As per claim 6, Li further discloses wherein the update of the one or more boot files and the one or more data files is performed in an offline manner (Paragraph 88).
As per claim 10, Li further discloses wherein the one or more boot files are stored separately from the one or more data files (Paragraph 88).
As per claim 11, Li further discloses wherein the one or more boot files are updated with one or more system files, one or more service files, one or more drivers, boot configuration, network configuration, or display configuration supported by the target platform, and the one or more data files are updated with a file system configuration and/or an operating system configuration supported by the target platform (Paragraph 85-100).
As per claim 12, Li further discloses wherein the trigger pertaining to the workload portability of the workload is associated with one of a group consisting of a migration event and a recovery event associated with the workload (Paragraph 83).
As per claim 13, Li further discloses wherein to execute the workload at the target platform, the target server is further configured to attach the updated one or more boot files and the updated one or more data files to a third virtual machine hosted at the target platform and boot the third virtual machine (Paragraph 37).
As per claim 14, Li further discloses wherein the target server is further configured to replicate, on the third virtual machine, security and access control configuration associated with the workload executing at the source platform (Paragraph 37).
As per claim 16, Li further discloses wherein the workload is one of from a group consisting of operating systems, containers, one or more applications running on the operating systems, and data associated with the one or more applications (Paragraph 36).
As per claim 17, it is a method claim having similar limitations as cited in claim 1 and is thus rejected under the same rationale.
As per claim 18, it is a medium claim having similar limitations as cited in claim 1 and is thus rejected under the same rationale.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Sharpe (US 9,811,662).
As per claim 3, Li does not expressly disclose but Sharpe discloses wherein the source server is further configured to encrypt the captured snapshot prior to the communication of the captured snapshot to the target server, and wherein the target server is further configured to decrypt the communicated snapshot prior to the storage at the target platform (Column 12, lines 50-65 “A number of factors affect the performance of accessing data from a cloud storage system. In a typical computer data is stored locally on a disk, and a number of hardware and operating system mechanisms attempt to minimize the latency of reads and writes. For instance, processors and operating systems strive to load frequently used data into memory and multiple levels of hardware caches, thereby reducing the latency associated with reading data from disk. Accessing data stored on a cloud storage system involves an additional set of latencies. For instance, in addition to normal disk latency, accessing a cloud storage system may involve additional latency due to network latency, network protocol handshaking, network transfer times, and delays associated with encryption or decryption. One of the challenges of a distributed filesystem is minimizing such latencies as much as possible.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sharpe with those of Li because doing so would secure the snapshots while in transit, thereby protecting the data. In this way, the combination benefits from the increased security of the product.
As per claim 15, Li does not expressly disclose but Sharpe discloses wherein the target server is further configured to determine authenticity of the snapshot upon reception at the target platform (Column 77, lines 20-42 “In some implementations, permissions and authentication for a distributed filesystem are provided using standard authentication techniques (e.g., an Active Directory service, an NT LAN Manager (NTML), the Kerberos protocol, etc.). Cloud commands for the distributed filesystem can be implemented to leverage such existing authentication techniques as well as existing filesystem abstractions. More specifically, users attempting to access cloud command functionality can do so via existing filesystem mechanisms (e.g., initiating cloud commands by invoking special files or scripts that appear in the CLOUDCMD branch of the distributed filesystem, as described above) and can be authenticated using their existing user names and credentials. These capabilities allow system administrators to delegate cloud command permissions using existing filesystem commands and permissions, thereby allowing trusted users to perform some management activities and potentially reducing the load upon IT staff. Note that in some scenarios users may also be granted the permission to delegate privileges to other users. For instance, a system administrator may grant a trusted user both the permission to invoke snapshots as well as the permission to grant the permission to invoke snapshots to other users. This trusted user can then grant the permission to invoke snapshots to a third user without requiring further interaction or permission from the system administrator.”).
Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Madhu (US 2016/0048408).
As per claim 7, Li does not expressly disclose but Madhu discloses wherein the one or more boot files and the one or more data files are stored in one or more block storages associated with the target platform, and wherein the target server is further configured to attach the one or more block storages to a second virtual machine hosted at the target server to enable the update of the one or more boot files and the one or more data files (Paragraph 128 “Other bootstrap operations may include: creating a private network in the on premise data center; creating a local prototype data mover attached to the private network; setting up the private network; creating a private network in the cloud; bridging the on premise and cloud private networks; configuring local and remote repositories; creating EBS volumes; grouping EBS volumes to create a repository; for each group, attach the EBS volumes to the gateway and initialize the group.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Madhu with those of Li because doing so ensures that newly instantiated virtual machines may be bootstrapped in accordance with commonly practiced methods. In this way, the combination benefits from a simplified bootstrapping process.
As per claim 8, Li does not expressly disclose but Madhu discloses wherein the target server stores the snapshot at each defined time-interval in the one or more block storages (Paragraphs 51-53).
As per claim 9, Li does not expressly disclose but Madhu discloses wherein the target server stores the snapshot at each defined time-interval in one of a group consisting of one or more object storages, one or more file storages, and one or more snapshot storages associated with the target platform, and wherein in response to the trigger, the target server is further configured to move the one or more boot files and the one or more data files to one or more block storages associated with the target platform (Paragraphs 51-53 and 128).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Mitkar (US 2017/0262520) discloses a process of performing snapshot replication operations (e.g., maintaining a mirror copy of primary data at a secondary location by generating snapshots of the primary data). The system can collect and maintain cumulative block-level changes to the primary data after each sub-interval of a plurality of sub-intervals between the snapshots. When a snapshot is generated, any changes to the primary data not reflected in the cumulative block-level changes are identified based on the snapshot and transmitted to the secondary location along with the cumulative block-level changes. By the time the snapshot is generated, some or all of the changes to the primary data associated with the given snapshot have already been included in the cumulative block-level changes, thereby reducing the time and computing resources spent to identify and collect the changes for transmission to the secondary location.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY A MUDRICK whose telephone number is (571)270-3374. The examiner can normally be reached 9am-5pm Central Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Vital, can be reached at (571)272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TIMOTHY A MUDRICK/Primary Examiner, Art Unit 2198 1/07/2026