Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1, 4-8, 11-13, and 16-20 are pending.
Response to Arguments
Regarding Prior Art Rejections:
Applicant’s amendments and arguments regarding the rejection of claims 1, 4-9, 11-13, and 16-20 under 35 U.S.C. 103 have been fully considered but are moot in view of the new ground of rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-6, 11, 13, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Watt (US 20150096011 A1) in view of Gomez et al. (US 8687805 B2), in view of Watt et al. (US 20130290542 A1, hereinafter Watt#2), and further in view of Kumar et al. (US 20210286639 A1).
Watt, Gomez, and Watt#2 are cited in a previous Office action.
Regarding claim 1, Watt teaches the invention substantially as claimed including:
A method comprising:
responsive to a request received by an application migration service (AMS) executed by a first processor to migrate ([0042] As used herein, a "migrated application configuration" is a transformed configuration where the application is installed and is operative on other, perhaps improved or upgraded computer servers selected from available resources; [0145] The migration manager 101 is the central point of control for all operations concerning application migration. High level details are shown in FIG. 13. The example embodiment provides a user interface 102 that includes a graphical user interface (GUI), a command line interface (CLI), and an application programming interface (API) so that the features of application migration can be used by end users, administrators, and computer automation systems) an application executed in a first compute instance in a source cloud environment ([0015] one or more of the workloads comprising a complex application can be migrated from their original source servers to any other available servers while maintaining the network relationships between them; Claim 1 receiving an application migration request for a migration of the complex computer application to a migrated application configuration) to a second compute instance in a target cloud environment ([0150] application migrater 103 is also responsible for deploying a new copy of an application from the template of a previously captured application, and for migrating an application directly from a running application to a new set of resources; [0017] migration of individual workloads from their source server to any available physical, virtual or cloud server--provided that the source and target server are object code compatible), authenticating by the AMS, credentials of a user with respect to the source cloud environment ([0103] The Migration Manager 101 maintains a database of its authorized users, with a UserAccount record for each user. 
This contains an ID 951 and name 952, as well as the login credentials 954 used to authenticate the user; [0145] a user interface 102 that includes a graphical user interface (GUI), a command line interface (CLI), and an application programming interface (API) so that the features of application migration can be used by end users, administrators, and computer automation systems. Access to all of the interfaces is authenticated using a username and password);
responsive to the credentials of the user being successfully authenticated ([0180] The first step is to install the capture agent 1131 onto the source servers at step 1841 ... agent installation can be automated if the node's administrative credentials are made available to the migration manager):
the first compute instance being deployed in a first customer tenancy of the user in the source cloud environment (Fig. 1 El 150-153 The Original Complex Application; [0042] an "initial application configuration" is one where the application may have been initially installed and is operative on various initial computer servers);
reserving, by the AMS, a source agent executed by a third processor ([0113] a capture agent 1131 that is installed on the source server 1130),
obtaining, by the source agent, one or more artifacts and configuration information that enable execution of the application ([0113] As shown in FIG.11, the workload migrater 110 used by the example embodiment works in conjunction with a capture agent 1131 that is installed on the source server; [0114] The capture agent 1131 associated with the workload migrator 110 gathers source image information 1101 about the source server 1130 and its image, reporting the information back to the workload migrater 110. The capture agent can also capture the server image 1191 to an image library 190 or stream the image directly to a target server 1140. After streaming or capturing its image, the capture agent can synchronize all changes that have been made to the source server's image since the last capture or synchronization directly to the target server or to the image library where they are stored as an incremental capture 1194; [0116] The source image information 1101 contains system and application configuration data collected from the source image being migrated. This data includes the operating system vendor and version, the size and layout of the file systems, and the number of network interfaces and their configuration. During an image capture, the source image configuration data 1192 is also stored in the image library 190 along with the captured image);
storing, by the source agent, the one or more artifacts and configuration information in an encrypted database ([0114] The capture agent can also capture the server image 1191 to an image library 190; [0116] During an image capture, the source image configuration data 1192 is also stored in the image library 190 along with the captured image; [0143] The internal structure of the image library can be segmented by end user identity in order to provide secure multi-tenancy. To further improve security in a multi-tenant environment, images can be encrypted with a user-specific key while stored in the library);
wherein responsive to storing the one or more artifacts and configuration information in the encrypted database, the source agent is released ([0187] The capture agents can connect to the provisioning network for the duration of migration tasks);
reserving a target agent implemented by a fourth processor ([0113] a deploy agent 1141 that is installed on the target server 1140), wherein the source cloud environment is different than the target cloud environment ([0113] source server 1130 … target server 1140);
instantiating, by the target agent, the second compute instance in a second customer tenancy of the user in the target cloud environment ([0118] A deploy process 1104 of the workload migrater 110 manages the deploy agent 1141 through the steps of deploying a captured image to the target server. It gathers source image information 1102 about the server from the agent, compares it with the configuration of the original server and its workload 1101, considers any requirements specified by the end user or migration manager as specified by a deployment profile 1110, and determines how to map the image onto the resources available on the target server. For example, the deploy process 1104 might consolidate multiple file systems that had originally been on separate disk drives onto the single virtual drive available on the target), the second customer tenancy being different than the second service tenancy;
retrieving, by the target agent, the one or more artifacts and configuration information from the encrypted database ([0115] the deploy agent streams the captured image from the ... image library; [0143] images can be encrypted with a user-specific key while stored in the library);
installing, by the target agent, the one or more artifacts and configuration information in the second compute instance ([0113] a deploy agent 1141 that is installed on the target server 1140; [0115] The deploy agent 1141 associated with the workload migrater 110 gathers target server information 1102 about the target server 1140 and reports it back to the workload migrater 110. Upon receiving instructions from the workload migrater, the deploy agent streams the captured image from the ... image library and deploys it to the target server along with any additional software packages and configuration changes specified by the workload manager; [0116] The source image information 1101 contains system and application configuration data collected from the source image being migrated. This data includes the operating system vendor and version, the size and layout of the file systems, and the number of network interfaces and their configuration. During an image capture, the source image configuration data 1192 is also stored in the image library 190 along with the captured image); and
responsive at least to completion of installation of the one or more artifacts and configuration information in the second compute instance by the target agent, releasing the target agent ([0187] The ... deploy agents can connect to the provisioning network for the duration of migration tasks; [0115] the deploy agent streams the captured image from the source server or image library and deploys it to the target server along with any additional software packages and configuration changes specified by the workload manager).
While Watt discloses a pair of keys including a public key and a private key ([0064] public/private key pairs), Watt does not teach:
generating, by the AMS, a pair of keys including a public key and a private key;
transmitting, by the AMS, the public key to a service manager executed by a second processor, the service manager being configured for injecting the public key in the application executed in the first compute instance of the source cloud environment, the first compute instance being deployed in a first customer tenancy of the user in the source cloud environment; and
assigning, by the AMS, the private key to the source agent;
obtaining, by the source agent, one or more artifacts and configuration information that enable execution of the application based on the private key.
However, Gomez teaches generating, by the AMS, a pair of keys including a public key and a private key (Cols 1,2 lines 66-67, 1-2 A method of ... generating a key pair, in particular a public key and a private key, for secure transmission of data between at least two applications or application programs);
transmitting, by the AMS, the public key to a service manager executed by a second processor, the service manager being configured for injecting the public key in the application executed in the first compute instance of the source cloud environment (Col 2 lines 62-64 A method of obtaining ... a public key ... by an application for secure transmission of data between at least two applications; Col 2 lines 38-53 The applications may be respectively associated with a public/private key pair ... In particular, the public key of the application the generated key pair is sent to is used for encryption. Thus it can be ensured that the generated key pair is only available or accessible for applications which had requested the generation of the key pair);
assigning, by the AMS, the private key to the source agent (Col 2 lines 62-64 A method of obtaining ... a private key, by an application for secure transmission of data between at least two applications; Col 7 lines 20-24 corresponding private key are delivered to the ... user ... After having received the key pair, any data can be signed, encrypted between the first and second user);
obtaining, by the source agent, one or more artifacts and configuration information that enable execution of the application based on the private key (Col 7 lines 43-46 The latter sends a challenge to be signed with the corresponding private key. The mobile application sends back the signed challenge. The third party checks the received signature; Col 5 lines 46-49 Once the first and second mobile application 16a, 16b have received the generated key pair and decrypted it with their respective private key any data can be transmitted between the first and second application using the generated key pair).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Watt with Gomez because Gomez’s teaching of utilizing public and private keys to securely transmit data would have provided Watt’s system with the advantage and capability to ensure accurate and secure transfer of the application and its data (Gomez Col 1 lines 15-19 In the scope of data exchange, confidentiality, integrity and availability of data is an important issue. Public Key Infrastructure (PKI) provides mechanisms for ensuring integrity (e.g. data signature) and confidentiality mechanisms (e.g. session key)).
Watt and Gomez do not teach the first service tenancy being different than the first customer tenancy and the second customer tenancy being different than the second service tenancy.
However, Watt#2 teaches the first service tenancy being different than the first customer tenancy ([0028] the capture agent runs on a computer other than the source server) and the second customer tenancy being different than the second service tenancy ([0051] In another embodiment the deploy agent runs on some other computer system that has access to the target server's system storage; [0051] the deploy agent 125 is installed on some other computer that has access to the storage system that will be used for the deployed image).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Watt#2’s separation of agent and target/source servers with the system of Watt and Gomez. A person of ordinary skill in the art would have been motivated to make this combination to provide Watt and Gomez’s system with flexibility in determining where capture/deploy agents are installed (see Watt#2 [0051] However, it will be understood that alternate embodiments wherein the deploy agent 125 is installed on some other computer, will include similar steps in a deploy process).
Watt, Gomez, and Watt#2 do not explicitly teach reserving from and releasing to both a pool of source agents deployed in the first service tenancy of the source cloud environment and a pool of target agents deployed in the second service tenancy of the target cloud environment.
However, Kumar teaches reserving from and releasing to both a pool of source agents deployed in the first service tenancy of the source cloud environment and a pool of target agents deployed in the second service tenancy of the target cloud environment (Fig 2A Source Subsystem 201 Data Agent(s) 242A; Fig 2A Destination Subsystem 203 Data Agent(s) 242B; [0139]-[0142] Data Agents).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Kumar’s source and destination agent pools with the existing system. A person of ordinary skill in the art would have been motivated to make this combination to provide the resulting system with the advantage of selecting specialized agents to perform migration tasks (see Kumar [0141] Each data agent 142 may be specialized for a particular application 110…).
Regarding claim 4, Watt, Gomez, Watt#2, and Kumar teach the method of claim 1.
Watt further teaches wherein the pair of keys is ephemeral and associated with the request to migrate the application ([0035] Memory 206 temporarily stores data encryption key 224 and data decryption key 225; [0063] Upon account registration, a user registers for an authentication account with the server to become a registered user. A client device corresponding to the user generates and stores a set of cryptographic keys ... The client device sends to the server the data decryption key on each authentication request).
Regarding claim 5, Watt, Gomez, Watt#2, and Kumar teach the method of claim 1.
Watt further teaches wherein injecting, by the service manager, the public key in the application corresponds to storing in an authorized key file associated with the application ([0103] The Migration Manager 101 maintains a database of its authorized users ... This contains ... the login credentials 954 used to authenticate the user).
Gomez teaches storing the public key (Col 6 lines 40-42 Subsequently the public key associated with the first mobile application 16a is added to the context information and the context information is signed (operation T12); Col 5 lines 65-67 the public an [sic] private key is generated taking into account the received context information. The public key may correspond to the context representation; Col 6 lines 52-54 any data can be signed with the context-based private key and the signature can be checked with the context-based public key).
Regarding claim 6, Watt, Gomez, Watt#2, and Kumar teach the method of claim 1.
Watt further teaches wherein each of the first compute instance and the second compute instance is a virtual machine ([0113] server image migration can also be performed without the use of a capture and/or deploy agent if the workload migrator has access to the server's image, such as ... when it is stored on a hypervisor host; [0136] it uses its infrastructure manager to create the target VM at step 1268 and to configure its virtual network interfaces such that they are placed on the proper local VLAN at step 1269).
Regarding claim 11, Watt, Gomez, Watt#2, and Kumar teach the method of claim 1.
Watt further teaches: creating, by the AMS, a virtual network interface card (VNIC) to be associated with the target agent (Fig 5 Elements 530, 531, 532; [0060] the VNA is a trusted component of the network infrastructure, it is trusted to use a tagged VLAN interface 510 for this segment and to multiplex it with any other tagged VLAN segments over a single network interface (NIC) 530; [0059] an overlay network has been configured on a VNA; [0187] In order for invention systems constructed as described herein to function on such isolated networks, they must be tied into a virtual overlay network using a VNA that has access to both the isolated network and an external network on which it can establish tunnel connections to other VNAs. One approach for handling isolated environments is to create a special "provisioning" overlay network ... deploy agents can connect to the provisioning network for the duration of migration tasks; [0112] a workload migrater 110 is responsible for deploying a VNA into a network domain; [0115] The deploy agent 1141 associated with the workload migrater 110), wherein the one or more artifacts and configuration information are installed by the target agent in the second compute instance via the VNIC (Fig 5 Elements 530, 531, 532; [0060] As the VNA is a trusted component of the network infrastructure, it is trusted to use a tagged VLAN interface 510 for this segment and to multiplex it with any other tagged VLAN segments over a single network interface (NIC) 530; [0187] deploy agents can connect to the provisioning network for the duration of migration tasks; [0115] deploy agent streams the captured image from the source server or image library and deploys it to the target server along with any additional software packages and configuration changes specified by the workload manager).
Regarding claim 13, it is the computer readable medium of claim 1. Therefore, it is rejected for the same reasons as claim 1.
Regarding claim 18, it is the computer readable medium of claim 11. Therefore, it is rejected for the same reasons as claim 11.
Regarding claim 20, it is the computing device of claim 1. Therefore, it is rejected for the same reasons as claim 1.
Claims 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Watt (US 20150096011 A1) in view of Gomez et al. (US 8687805 B2), in view of Watt#2 (US 20130290542 A1), in view of Kumar et al. (US 20210286639 A1), and further in view of Kapoor et al. (US 20180196655 A1).
Kapoor is cited in a previous Office action.
Regarding claim 12, Watt, Gomez, Watt#2, and Kumar teach the method of claim 1.
Watt, Gomez, Watt#2, and Kumar do not explicitly teach wherein the application to be migrated from the source cloud environment to the target cloud environment is a platform-as-a-service application.
However, Kapoor teaches wherein the application to be migrated from the source cloud environment to the target cloud environment is a platform-as-a-service application ([0038] the system 300 migrates the application 311 from the PaaS 301 to the updated environment 330).
Kapoor teaches PaaS application migration whereas Watt does not specifically teach migration of a PaaS application. The PaaS service model is well known in the art (Kapoor [0023] Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider). One of ordinary skill in the art is able to substitute in the PaaS application of Kapoor for the generic application of Watt with predictable results (Kapoor [0037] a conformance checker validates conformance requirements and a reconstructor is configured to reconstruct application dependencies when migrating applications from a first environment (such as PaaS) to an updated environment (e.g., a target environment updated to receive the applications; the target environment can also be a PaaS environment). Technical effects and benefits of the migrating system herein include supporting, by utilizing a conformance checker and a dependency reconstructor, required special security policies of PaaS applications the serve and bind to information-sensitive end-user data when migrating PaaS applications to updated environments).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined, via simple substitution, the migration of the PaaS application of Kapoor with the generic application migration system of Watt and the security method of Gomez, resulting in a migration system with data security/integrity that is able to migrate PaaS applications between cloud environments.
Regarding claim 19, it is the computer readable medium of claim 12. Therefore, it is rejected for the same reasons as claim 12.
Allowable Subject Matter
Claims 7, 8, 16, and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at https://www.uspto.gov/interviewpractice.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARRISON LI whose telephone number is (703) 756-1469. The examiner can normally be reached Monday-Friday, 9:00am-5:30pm ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li, can be reached on 571-272-4169. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.L./
Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195