Prosecution Insights
Last updated: April 19, 2026
Application No. 18/441,954
Title: Opportunistic Migration of Data Between Cloud Storage Provider Systems

Status: Final Rejection under §103
Filed: Feb 14, 2024
Examiner: NGUYEN, HAO HONG
Art Unit: 2447
Tech Center: 2400 — Computer Networks
Assignee: Attimis Corporation
OA Round: 2 (Final)

Predictions
Grant probability: 67% (Favorable)
Expected OA rounds: 3-4
Expected time to grant: 3y 2m
With interview: 99%

Examiner Intelligence

Career allowance rate: 67% (202 granted / 301 resolved) — above average, +9.1% vs Tech Center average
Interview lift: +37.9% (strong), measured across resolved cases with interview
Typical timeline: 3y 2m average prosecution; 32 applications currently pending
Career history: 333 total applications across all art units
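The headline percentages above are simple ratios over the examiner's resolved cases. A quick sketch reproduces the panel's arithmetic (variable names are ours; figures are taken from the panel above):

```python
# Reproduce the examiner-panel arithmetic from the career figures above.
granted = 202
resolved = 301

allow_rate = granted / resolved              # career allowance rate
print(f"{allow_rate:.1%}")                   # 67.1%, shown rounded as 67%

# "+9.1% vs TC avg" implies a Tech Center average of roughly 58%.
tc_avg = allow_rate - 0.091
print(f"{tc_avg:.1%}")                       # 58.0%
```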

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 62.9% (+22.9% vs TC avg)
§102: 17.4% (-22.6% vs TC avg)
§112: 3.1% (-36.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 301 resolved cases.

Office Action

§103
DETAILED ACTION

Applicant’s Amendment filed on December 17, 2025 has been reviewed. Claims 1, 8, 11 and 16 are amended in the amendment. Claims 1-16 have been examined.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 7-14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Beard et al. (WO 2015/031540 A1), hereinafter referred to as Beard, in view of Prahlad et al. (US 2019/0179805 A1), hereinafter referred to as Prahlad.

With respect to claim 1, Beard teaches A method of data processing using a proxy (providing a transparent virtualization of the source filer 102, so that the client terminals continue to issue requests for use of the source filer 102 for purpose of intercepting and proxying client/source filer exchanges, para. 0047), the method comprising: receiving, from a client device, a first data object request requesting a first data object, wherein the first data object request is directed by the client device to a first networked data storage server (handling requests from individual clients for file system objects of the source file system; in handling requests, the one or more processors identify a file handle specified in a given request from one of the plurality of clients, and retrieve, from the source file system, a set of metadata associated with the specified file handle, para. 0019); determining, using the proxy, whether the requested first data object is available from a second networked data storage server (determining whether the file handle specified in the given request identifies a first file system object on the source file system and a second file system object that is not the counterpart to the first file system object stored in the target memory; in response to determining that the file handle specified in the given request identifies the first file system object on the source file system and the second file system object in the target memory, the server removes the second file system object from the target memory and stores, in the target memory, the first file system object in association with the file handle specified in the given request, para. 0018); if the requested first data object is available from the second networked data storage server, retrieving the requested first data object from the second networked data storage server using the proxy (when a particular file system object is deemed valid, the target memory 30 can be used to provide a response to a corresponding client request 11, para. 0038; when the migration has progressed to the point that the destination filer 104 provides the responses to the client requests 111, the mapper 160 can translate the attributes of a file system object retrieved from the destination filer 104, para. 0072); if the requested first data object is not available from the second networked data storage server, retrieving the requested first data object from the first networked data storage server using the proxy (scanning the file system objects of the source filer 102; issuing requests to the source filer 102 for purpose of scanning the source filer; the attributes for individual file system objects can be used to determine whether the particular file system object had previously been migrated to the destination filer 104; if the data migration system 100 has not acquired the attributes for a file system object, the object is deemed as being non-migrated or newly discovered; once identified, the attribute for each such file system object is retrieved, para. 0108); providing the requested first data object to the client device (when a particular file system object is deemed valid, the target memory 30 can be used to provide a response to a corresponding client request 11, para. 0038); and if the requested first data object is not available from the second networked data storage server, providing the requested first data object, using the proxy, to the second networked data storage server (determining whether the file handle specified in the given request identifies a first file system object on the source file system and a second file system object that is not the counterpart to the first file system object stored in the target memory; in response to determining that the file handle specified in the given request identifies the first file system object on the source file system and the second file system object in the target memory, the server removes the second file system object from the target memory and stores, in the target memory, the first file system object in association with the file handle specified in the given request, para. 0018), thereby migrating the first data object from the first networked data storage server to the second networked data storage server (data migration may be implemented and in progress, so that data is transferred from the source filer 102 to the destination filer 104 while the clients 101 continue to issue requests from the source filer; data migration system 100 forwards client requests to the destination filer 104, para. 0146).

Beard does not explicitly teach wherein the first networked data storage server is a server operated by a provider that imposes constraints on usage of the first networked data storage server. However, Prahlad teaches wherein the first networked data storage server is a server operated by a provider that imposes constraints on usage of the first networked data storage server (a client computer or organization may contract with a cloud storage provider for a defined level of service, where the level of service relates to a storage policy as defined herein (e.g., aggregated data storage volumes, fault tolerance, data recovery rates, threshold latency and/or bandwidth, etc., defined under a service level agreement (SLA)), para. 0068; a storage policy may comprise a provisioning policy; a provisioning policy is a set of preferences, priorities, rules and/or criteria that specify how various clients 130 (or groups of clients 130, e.g., a group of clients 130 associated with a department) may utilize various system resources, including resources such as available storage on cloud storage sites 115A-N and/or the network bandwidth between the storage operation cell 150 and cloud storage sites 115A-N, para. 0070) in order to improve data transfers as taught by Prahlad (para. 0065). Therefore, based on Beard in view of Prahlad, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Prahlad to the method of Beard in order to improve data transfers as taught by Prahlad (para. 0065).

With respect to claim 2, Beard teaches The method of claim 1, wherein determining whether the requested first data object is available from the second networked data storage server comprises executing an asynchronous query (the data migration system 100 can implement processes that initially populate the destination filer 104 asynchronously, while the clients actively use the source filer 102; moreover, file system operations communicated from the clients 101 can be implemented asynchronously at the destination filer 104, para. 0049).

With respect to claim 3, Beard teaches The method of claim 1, wherein determining whether the requested first data object is available from the second networked data storage server comprises reading from a record of data object availability created by the proxy in response to prior requests (scanning the file system objects of the source filer 102; issuing requests to the source filer 102 for purpose of scanning the source filer; the attributes for individual file system objects can be used to determine whether the particular file system object had previously been migrated to the destination filer 104; if the data migration system 100 has not acquired the attributes for a file system object, the object is deemed as being non-migrated or newly discovered; once identified, the attribute for each such file system object is retrieved, para. 0108).
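For orientation, the claim-1 flow the examiner is mapping onto Beard amounts to a read-through, write-through proxy: check the target (second) server first, fall back to the source (first) server, and opportunistically copy anything served from the source over to the target. The following is a minimal illustrative sketch only; every class and method name is hypothetical and appears in neither the application nor the cited references:

```python
# Sketch of the claim-1 flow: target check, source fallback,
# and opportunistic migration of the served object.
class MigrationProxy:
    def __init__(self, source, target):
        self.source = source   # first networked data storage server
        self.target = target   # second networked data storage server

    def handle_request(self, object_id):
        obj = self.target.get(object_id)       # already migrated?
        if obj is None:
            obj = self.source.get(object_id)   # fall back to the source
            self.target.put(object_id, obj)    # opportunistic migration
        return obj                             # response to the client

# Toy in-memory stand-ins for the two storage servers.
class Store(dict):
    def put(self, key, value):
        self[key] = value

source, target = Store(a=b"data"), Store()
proxy = MigrationProxy(source, target)
assert proxy.handle_request("a") == b"data"    # served from the source
assert "a" in target                           # now migrated to the target
```

Note that the migration is a side effect of ordinary client traffic, which is what distinguishes this "opportunistic" pattern from a bulk background copy.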
With respect to claim 4, Beard teaches The method of claim 1, wherein determining whether the requested first data object is available from the second networked data storage server comprises: performing a multi-stage check using a cache (the source cache engine 132 can procure and cache the attributes of the source filer 102; when the attributes are acquired for a given OID node 131, such as when the replication engine 124 issues a GetAttr request, the request can be made to the source cache engine, para. 0066); and executing a query (the source cache engine 132 can procure and cache the attributes of the source filer 102; when the attributes are acquired for a given OID node 131, such as when the replication engine 124 issues a GetAttr request, the request can be made to the source cache engine, para. 0066).

With respect to claim 5, Beard teaches The method of claim 1, wherein the first data object request is received according to a first protocol (the file system server 110 receives and processes NFS (version 3) packets issued from clients 101; other file system protocols can also be accommodated; the file system server 110 can include logical components that summarize the protocol-specific request (e.g., NFS request) before processing the request in a protocol-agnostic manner, para. 0050), retrieving the first data object from the second networked data storage server is performed according to a second protocol (when a particular file system object is deemed valid, the target memory 30 can be used to provide a response to a corresponding client request 11, para. 0038; when the migration has progressed to the point that the destination filer 104 provides the responses to the client requests 111, the mapper 160 can translate the attributes of a file system object retrieved from the destination filer 104, para. 0072), and retrieving the requested first data object from the first networked data storage server is performed according to a third protocol (scanning the file system objects of the source filer 102; issuing requests to the source filer 102 for purpose of scanning the source filer; the attributes for individual file system objects can be used to determine whether the particular file system object had previously been migrated to the destination filer 104; if the data migration system 100 has not acquired the attributes for a file system object, the object is deemed as being non-migrated or newly discovered; once identified, the attribute for each such file system object is retrieved, para. 0108).

With respect to claim 7, Beard teaches The method of claim 1, further comprising: caching the requested first data object at the proxy as a cached copy (the source cache engine 132 can cache file system objects on discovery, and subsequently identify those file system objects that are more frequently requested, para. 0051); and providing the cached copy to the second networked data storage server as the requested first data object (the source cache engine 132 can procure and cache the attributes of the source filer 102; this offloads some of the load required from the source filer 102 during the migration process, para. 0066; the replication engine 124 can implement processes to replicate a file system object with the destination filer 104, para. 0067).

With respect to claim 8, Beard teaches A method of data migration from a source networked data storage server to a target networked data storage server (providing a transparent virtualization of the source filer 102, so that the client terminals continue to issue requests for use of the source filer 102 for purpose of intercepting and proxying client/source filer exchanges, para. 0047), the method comprising: determining, using a proxy, a first protocol for interacting with the source networked data storage server (the file system server 110 receives and processes NFS (version 3) packets issued from clients 101; other file system protocols can also be accommodated; the file system server 110 can include logical components that summarize the protocol-specific request (e.g., NFS request) before processing the request in a protocol-agnostic manner, para. 0050); determining, using the proxy, a second protocol for interacting with the target networked data storage server (each of the source and destination filers 102, 104 can correspond to a network-based file system, such as those that utilize a protocol such as NFS Version 3 or Version 4, para. 0044); receiving, from a client device using the first protocol, a first data object request requesting a first data object, wherein the first data object request is directed by the client device to the source networked data storage server (handling requests from individual clients for file system objects of the source file system; in handling requests, the one or more processors identify a file handle specified in a given request from one of the plurality of clients, and retrieve, from the source file system, a set of metadata associated with the specified file handle, para. 0019); obtaining the first data object from the source networked data storage server, using the first protocol (when a particular file system object is deemed valid, the target memory 30 can be used to provide a response to a corresponding client request 11, para. 0038); providing the first data object to the client device, using the first protocol (when a particular file system object is deemed valid, the target memory 30 can be used to provide a response to a corresponding client request 11, para. 0038); and providing the first data object to the target networked data storage server, using the second protocol (determining whether the file handle specified in the given request identifies a first file system object on the source file system and a second file system object that is not the counterpart to the first file system object stored in the target memory; in response to determining that the file handle specified in the given request identifies the first file system object on the source file system and the second file system object in the target memory, the server removes the second file system object from the target memory and stores, in the target memory, the first file system object in association with the file handle specified in the given request, para. 0018).

Beard does not explicitly teach wherein the source networked data storage server is a server operated by a provider that imposes constraints on usage of the source networked data storage server. However, Prahlad teaches wherein the source networked data storage server is a server operated by a provider that imposes constraints on usage of the source networked data storage server (a client computer or organization may contract with a cloud storage provider for a defined level of service, where the level of service relates to a storage policy as defined herein (e.g., aggregated data storage volumes, fault tolerance, data recovery rates, threshold latency and/or bandwidth, etc., defined under a service level agreement (SLA)), para. 0068; a storage policy may comprise a provisioning policy.
A provisioning policy is a set of preferences, priorities, rules and/or criteria that specify how various clients 130 (or groups of clients 130, e.g., a group of clients 130 associated with a department) may utilize various system resources, including resources such as available storage on cloud storage sites 115A-N and/or the network bandwidth between the storage operation cell 150 and cloud storage sites 115A-N, para. 0070) in order to improve data transfers as taught by Prahlad (para. 0065). Therefore, based on Beard in view of Prahlad, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Prahlad to the method of Beard in order to improve data transfers as taught by Prahlad (para. 0065).

With respect to claim 9, Beard teaches The method of claim 8, further comprising caching the first data object at the proxy after obtaining the first data object from the source networked data storage server and before providing the first data object to the target networked data storage server (the source cache engine 132 can cache file system objects on discovery, and subsequently identify those file system objects that are more frequently requested, para. 0051; the source cache engine 132 can procure and cache the attributes of the source filer 102; this offloads some of the load required from the source filer 102 during the migration process, para. 0066; the replication engine 124 can implement processes to replicate a file system object with the destination filer 104, para. 0067).

With respect to claim 10, Beard in view of Prahlad teaches The method of claim 8 as described above. Beard in view of Prahlad does not explicitly teach further comprising interacting with a plurality of source networked data storage servers to obtain client data. However, Prahlad teaches further comprising interacting with a plurality of source networked data storage servers to obtain client data (presents clients 130 and other system components with a unified name space, even if the system is storing data on multiple cloud storage sites 115, para. 0112) in order to improve data transfers as taught by Prahlad (para. 0065). Therefore, based on Beard in view of Prahlad, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Prahlad to the method of Beard in order to improve data transfers as taught by Prahlad (para. 0065).

With respect to claim 11, Beard in view of Prahlad teaches The method of claim 8 as described above. Beard in view of Prahlad does not explicitly teach further comprising interacting with a plurality of target networked data storage servers to migrate client data to. However, Prahlad teaches further comprising interacting with a plurality of target networked data storage servers to migrate client data to (a cloud storage submodule may obviate the need for complex scripting or the addition of disparate cloud gateway appliances to write data to multiple cloud storage site targets, para. 0112) in order to improve data transfers as taught by Prahlad (para. 0065). Therefore, based on Beard in view of Prahlad, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Prahlad to the method of Beard in order to improve data transfers as taught by Prahlad (para. 0065).

With respect to claim 12, Beard teaches The method of claim 8, wherein the first protocol and the second protocol are the same protocol (each of the source and destination filers 102, 104 can correspond to a network-based file system, such as those that utilize a protocol such as NFS Version 3 or Version 4, para. 0044).
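Claims 8 and 12 turn on the proxy speaking one protocol toward the source server and another (possibly identical) protocol toward the target. The translation step can be sketched as a pair of protocol adapters; this is an illustrative sketch only, and the adapter names are hypothetical rather than drawn from the application or the references:

```python
# Sketch of the claim-8 idea: one protocol adapter faces the source
# server, a different one faces the target, and the proxy translates.
class NfsAdapter:
    name = "NFSv3-like"
    def fetch(self, store, key):
        return store[key]                  # read via the first protocol

class ObjectStoreAdapter:
    name = "object-store-like"
    def store_obj(self, store, key, value):
        store[key] = value                 # write via the second protocol

def migrate_on_read(key, src, dst, src_proto, dst_proto):
    obj = src_proto.fetch(src, key)        # first protocol, toward source
    dst_proto.store_obj(dst, key, obj)     # second protocol, toward target
    return obj                             # returned to the client

src, dst = {"f": b"bytes"}, {}
assert migrate_on_read("f", src, dst, NfsAdapter(), ObjectStoreAdapter()) == b"bytes"
assert dst["f"] == b"bytes"
```

Claim 12's case (first and second protocol identical) corresponds to passing the same adapter on both sides.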
With respect to claim 13, Beard in view of Prahlad teaches The method of claim 8 as described above. Further, Prahlad teaches wherein the first protocol comprises first encryption steps, the second protocol comprises second encryption steps (when a system migrates or copies data to secondary storage, including secondary cloud storage, the system encrypts the data before or after a secondary copy or archival copy is created, para. 0373; users protect data starting from the source with in-stream encryption, and then extend encryption to data “at-rest”, para. 0393), and the first encryption steps are distinct from the second encryption steps (when a system migrates or copies data to secondary storage, including secondary cloud storage, the system encrypts the data before or after a secondary copy or archival copy is created, para. 0373; users protect data starting from the source with in-stream encryption, and then extend encryption to data “at-rest”, para. 0393), the method further comprising: using the first encryption steps to decrypt the first data object as received from the source networked data storage server (a cloud storage site API permits storing encrypted data belonging to a client on a cloud storage site, together with an encrypted version of the encryption key that was used to encrypt the encrypted data; a password would be required from the client in order to decrypt the encrypted version, para. 0114); and using the second encryption steps to encrypt the first data object prior to sending to the target networked data storage server (when a system migrates or copies data to secondary storage, including secondary cloud storage, the system encrypts the data before or after a secondary copy or archival copy is created, para. 0373) in order to enhance the “at-rest” security of files stored within a cloud storage site 115A-N, by reducing the risk of unauthorized access to the files' content as taught by Prahlad (para. 0373).
Therefore, based on Beard in view of Prahlad, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Prahlad to the method of Beard in order to enhance the “at-rest” security of files stored within a cloud storage site 115A-N, by reducing the risk of unauthorized access to the files' content as taught by Prahlad (para. 0373).

With respect to claim 14, Beard in view of Prahlad teaches The method of claim 8 as described above. Further, Prahlad teaches further comprising: selecting the source networked data storage server from among a plurality of source networked data storage servers (to enable the selection of cloud storage sites on the basis of actual performance, a storage manager 105, secondary storage computing devices 165 and/or other system components may track, log and/or analyze the performance achieved by cloud storage sites, para. 0068); and selecting the target networked data storage server from among a plurality of target networked data storage servers, wherein a selected target networked data storage server is selected based on data security requirements (the deduplication module 299 restricts the lookup to those cloud storage sites 115 that would satisfy storage policy parameters applicable to each block, such as class of storage used for the object such as data security associated with a particular cloud storage site, para. 0180) in order to improve data transfers as taught by Prahlad (para. 0065). Therefore, based on Beard in view of Prahlad, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Prahlad to the method of Beard in order to improve data transfers as taught by Prahlad (para. 0065).

With respect to claim 16, Beard teaches A non-transitory computer-readable storage medium storing instructions (machine-readable medium, para.
0159), which when executed by at least one processor of a computer system (processor 1104 executing one or more sequences of one or more instructions contained in main memory 1106. Such instructions may be read into main memory 1106 from another machine-readable medium, such as storage device 1110, para. 0159), causes the computer system to: receive, from a client device, a first data object request requesting a first data object, wherein the first data object request is directed by the client device to a first networked data storage server (handling requests from individual clients for file system objects of the source file system; in handling requests, the one or more processors identify a file handle specified in a given request from one of the plurality of clients, and retrieve, from the source file system, a set of metadata associated with the specified file handle, para. 0019), determine whether the requested first data object is available from a second networked data storage server (determining whether the file handle specified in the given request identifies a first file system object on the source file system and a second file system object that is not the counterpart to the first file system object stored in the target memory; in response to determining that the file handle specified in the given request identifies the first file system object on the source file system and the second file system object in the target memory, the server removes the second file system object from the target memory and stores, in the target memory, the first file system object in association with the file handle specified in the given request, para. 
0018); if the requested first data object is available from the second networked data storage server, retrieve the requested first data object from the second networked data storage server (when a particular file system object is deemed valid, the target memory 30 can be used to provide a response to a corresponding client request 11, para. 0038; when the migration has progressed to the point that the destination filer 104 provides the responses to the client requests 111, the mapper 160 can translate the attributes of a file system object retrieved from the destination filer 104, para. 0072); if the requested first data object is not available from the second networked data storage server, retrieve the requested first data object from the first networked data storage server (scanning the file system objects of the source filer 102; issuing requests to the source filer 102 for purpose of scanning the source filer; the attributes for individual file system objects can be used to determine whether the particular file system object had previously been migrated to the destination filer 104; if the data migration system 100 has not acquired the attributes for a file system object, the object is deemed as being non-migrated or newly discovered; once identified, the attribute for each such file system object is retrieved, para. 0108); provide the requested first data object to the client device (when a particular file system object is deemed valid, the target memory 30 can be used to provide a response to a corresponding client request 11, para.
0038); and if the requested first data object is not available from the second networked data storage server, provide the requested first data object to the second networked data storage server (determining whether the file handle specified in the given request identifies a first file system object on the source file system and a second file system object that is not the counterpart to the first file system object stored in the target memory; in response to determining that the file handle specified in the given request identifies the first file system object on the source file system and the second file system object in the target memory, the server removes the second file system object from the target memory and stores, in the target memory, the first file system object in association with the file handle specified in the given request, para. 0018), thereby migrating the first data object from the first networked data storage server to the second networked data storage server (data migration may be implemented and in progress, so that data is transferred from the source filer 102 to the destination filer 104 while the clients 101 continue to issue requests from the source filer; data migration system 100 forwards client requests to the destination filer 104, para. 0146). Beard does not explicitly teach wherein the first networked data storage server is a server operated by a provider that imposes constraints on usage of the first networked data storage server; However, Prahlad teaches wherein the first networked data storage server is a server operated by a provider that imposes constraints on usage of the first networked data storage server (a client computer or organization may contract with a cloud storage provider for a defined level of service, where the level of service relates to a storage policy as defined herein (e.g. 
aggregated data storage volumes, fault tolerance, data recovery rates, threshold latency and/or bandwidth, etc., defined under a service level agreement (SLA), para. 0068; a storage policy may comprise a provisioning policy. A provisioning policy is a set of preferences, priorities, rules and/or criteria that specify how various clients 130 (or groups of clients 130, e.g., a group of clients 130 associated with a department) may utilize various system resources, including resources such as available storage on cloud storage sites 115A-N and/or the network bandwidth between the storage operation cell 150 and cloud storage sites 115A-N, para. 0070) in order to improve data transfers as taught by Prahlad (para. 0065); Therefore, based on Bread in view of Prahlad, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of Prahlad to the medium of Beard in order to improve data transfers as taught by Prahlad (para. 0065). Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Beard et al. (WO 2015/031540 A1), hereinafter referred to as Beard, in view of Prahlad et al. (US 2019/0179805 A1), hereinafter referred to as Prahlad, and further in view of Polak et al. (US 2021/0374072 A1), hereinafter referred to as Polak. With respect to claim 6, Beard teaches The method of claim 5, further comprising: providing the combined request to the second networked data storage server and/or the first networked data storage server (handling requests from individual clients for file system objects of the source file system; in handling requests, the one or more processors identify a file handle specified in a given request from one of the plurality of clients, and retrieve, from the source file system, a set of metadata associated with the specified file handle, para. 0019). 
Further, Prahlad teaches caching a multipart request from the client device (the callback layer 1750 traps commands to the cache 1644, where that command identifies certain blocks on a disk for access or modifications, and writes to the data structure the changed blocks, para. 0264) in order to improve data transfers as taught by Prahlad (para. 0065).

Therefore, based on Beard in view of Prahlad, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of Prahlad to the method of Beard in order to improve data transfers as taught by Prahlad (para. 0065).

Beard in view of Prahlad does not explicitly teach caching if the first protocol provides for multipart request and the second protocol and/or the third protocol does not; and assembling a plurality of data parts from the multipart request into a single request to form a combined request; and providing the combined request. However, Polak teaches caching if the first protocol provides for multipart request and the second protocol and/or the third protocol does not (the access request formatted according to a protocol associated with a first data store; in the access request, an operation name accompanied by one or more attributes of the requested operation, para. 0028; the access request translated into a different format by the compatibility layer, thereby producing a translated access request; the different format represent a format expected by a second data store that stores the one or more records identified in the access request; the second data store associated with a different protocol that dictates how access requests are formatted, para.
0029); and assembling a plurality of data parts from the multipart request into a single request to form a combined request (the access request formatted according to a protocol associated with a first data store; in the access request, an operation name accompanied by one or more attributes of the requested operation, para. 0028; the access request translated into a different format by the compatibility layer, thereby producing a translated access request; the different format represent a format expected by a second data store that stores the one or more records identified in the access request; the second data store associated with a different protocol that dictates how access requests are formatted, para. 0029); and providing the combined request (the access request formatted according to a protocol associated with a first data store; in the access request, an operation name accompanied by one or more attributes of the requested operation, para. 0028; the access request translated into a different format by the compatibility layer, thereby producing a translated access request; the different format represent a format expected by a second data store that stores the one or more records identified in the access request; the second data store associated with a different protocol that dictates how access requests are formatted, para. 0029) in order to facilitate migration from a first data store to a second data store without the need to change the program code of client applications or client libraries that use the protocol of the first data store as taught by Polak (para. 0013). 
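The claim-6 limitation mapped above (cache the parts of a multipart request, then assemble them into a single combined request when the downstream protocol does not support multipart) can likewise be sketched. All names here are hypothetical; this is an aid to reading the claim language, not an account of Polak's implementation.

```python
# Hypothetical sketch of the claim-6 idea: when the client-facing protocol
# supports multipart uploads but the downstream protocol does not, the proxy
# caches the parts and assembles them into one combined request.
class MultipartAssembler:
    def __init__(self):
        self.cache = {}  # upload_id -> {part_number: bytes}

    def put_part(self, upload_id, part_number, data):
        # Cache each part as it arrives; parts may arrive out of order.
        self.cache.setdefault(upload_id, {})[part_number] = data

    def complete(self, upload_id):
        # Assemble the cached parts, in part-number order, into a single
        # payload suitable for a protocol without multipart support.
        parts = self.cache.pop(upload_id)
        return b"".join(parts[n] for n in sorted(parts))

asm = MultipartAssembler()
asm.put_part("u1", 2, b"world")
asm.put_part("u1", 1, b"hello ")
combined = asm.complete("u1")
assert combined == b"hello world"
```

Note that the combined payload, not the individual parts, is what would be forwarded to the second server, which is the "providing the combined request" step of the claim.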
Therefore, based on Beard in view of Prahlad, and further in view of Polak, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of Polak to the method of Beard in view of Prahlad in order to facilitate migration from a first data store to a second data store without the need to change the program code of client applications or client libraries that use the protocol of the first data store as taught by Polak (para. 0013).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Beard et al. (WO 2015/031540 A1), hereinafter referred to as Beard, in view of Prahlad et al. (US 2019/0179805 A1), hereinafter referred to as Prahlad, and further in view of Balasubramanian et al. (US 2018/0191599 A1), hereinafter referred to as Balasubramanian.

With respect to claim 15, Beard in view of Prahlad teaches The method of claim 14 as described above. Beard in view of Prahlad does not explicitly teach wherein the data security requirements include jurisdictional data handling requirements. However, Balasubramanian teaches wherein the data security requirements include jurisdictional data handling requirements (capturing domain-related requirements, such as security requirements, compliance requirements, and jurisdictional requirements, para. 0055) in order to determine the effectiveness of enterprise cloud migration as taught by Balasubramanian (para. 0108).

Therefore, based on Beard in view of Prahlad, and further in view of Balasubramanian, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of Balasubramanian to the method of Beard in view of Prahlad in order to determine the effectiveness of enterprise cloud migration as taught by Balasubramanian (para. 0108).
Response to Arguments

Applicant’s arguments with respect to claims 1-16 have been considered but are moot because the arguments do not apply to any of the references being used in the current rejection.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAO HONG NGUYEN whose telephone number is (571)272-2666. The examiner can normally be reached on Monday-Friday 8AM-4:30PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOON H. HWANG, can be reached on 571-272-4036. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only.
For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/H.H.N/
Examiner, Art Unit 2447
March 28, 2026

/JOON H HWANG/
Supervisory Patent Examiner, Art Unit 2447

Prosecution Timeline

Feb 14, 2024
Application Filed
Jun 13, 2025
Non-Final Rejection — §103
Dec 17, 2025
Response Filed
Mar 28, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592901
SYSTEMS AND METHOD FOR EFFICIENT ROUTING BASED UPON IDENTIFIED SUBJECT MATTER
2y 5m to grant Granted Mar 31, 2026
Patent 12554460
Audio Streaming of Text-Based Articles from Newsfeeds
2y 5m to grant Granted Feb 17, 2026
Patent 12549625
MOBILITY-AWARE ITERATIVE SFC MIGRATION IN A DYNAMIC 5G EDGE ENVIRONMENT
2y 5m to grant Granted Feb 10, 2026
Patent 12542837
DEVICES AND METHODS FOR REQUESTS PREDICTION
2y 5m to grant Granted Feb 03, 2026
Patent 12531807
METHOD AND APPARATUS FOR DYNAMIC AND EFFICIENT LOAD BALANCING IN MOBILE COMMUNICATION NETWORK
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+37.9%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 301 resolved cases by this examiner. Grant probability derived from career allow rate.
