DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to amendments filed on February 4, 2026.
Claims 1-11 are pending.
Claims 1, 7 and 11 have been amended.
Response to Amendment
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-11 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Jain (US 8,935,317).
With respect to Claim 1, Jain discloses:
initiating a network communication session between the client terminal and the server, (communications framework that is employed to facilitate communications between client(s) and server(s), Column 18, lines 17-30) for delivery of streaming data that is output from the instance of the cloud based software application, from the server to the client terminal; (dynamically adapting to a connected-device environment by adjusting which computation of the cloud application (delivery of streaming data) runs on the client device and which parts run on the middle (or intermediary) server and which parts run in the datacenter, Column 2, lines 34-41)
initiating execution of the instance of the cloud based software application at the server; (server hosting cloud applications in a datacenter, Column 3, lines 47-52; cloud application execution running on server and/or client side, Column 11, lines 28-34)
subsequent to initiating the network communication session and initiating execution of the instance of the cloud based software application at the server,
receiving a data message identifying a software code instruction for execution, (The system 200 includes the request component 102 that receives the request 104 from a cloud application 202 (subsequent to initiating the network communication session and initiating execution of the instance of the cloud based software application at the server) to process workload 204 (software code instruction for execution), Column 10, lines 54-55) wherein said software code instruction is associated with functionality of the instance of the cloud based software application; (workload/computation (functionality of the instance of the cloud based software application) in a cloud application, Abstract, lines 1-4)
responsive to receiving the data message identifying the software code instruction for execution: (a request is received at a server from a client application of a client device for processing workload (software code instruction for execution), Column 10, lines 54-55)
selecting one of the client terminal and the server for execution of the software code instruction identified in the data message, (dynamically splitting the computation in an application between a client and servers in a datacenter, Abstract, lines 1-4; determining optimal placement of application components (assigning to either the client, middle server or a server hosted in the datacenter), Column 3, lines 34-37; The server application 108 can be one of multiple servers of a datacenter 118 to which the workload (computation) can be assigned and/or distributed, Column 4, lines 18-20) wherein the step of selecting one of the client terminal and the server is based on:
one or more rules for routing software code instructions to one or the other of the client terminal and the server for execution; (optimal partitioning can be based on energy consumption of the client, resource footprint of the client, network connectivity, and/or a service level agreement, computation costs of components running on the client device, memory footprint to run components on the client device, bandwidth needed based on the partitioning, power usage by the client device, end-to-end latency as a function of compute time and transmission latency, conservation of minimum battery life of the client and/or datacenter utilization where the server is sited, Column 5, lines 8-18)
and information representing hardware configurations or hardware systems required or preferred for execution of the software code instruction; (The optimal partitioning can be based on computation and storage costs of components running on the client device, memory footprint to run components on the client device, bandwidth needed based on the partitioning, power usage by the client device, end-to-end latency as a function of compute time and transmission latency, conservation of minimum battery life of client, and/or datacenter utilization where the server is sited, among other factors, Column 4, lines 50-59; The optimal partitioning can be based on energy consumption of the client device, resource footprint of the client device, data dependencies, network connectivity, and/or service level agreement, application characteristics, power or energy available at the client, size of the application objects, load in the datacenter, security and privacy concerns (e.g., cannot share all data on the client with the datacenter), computation, memory, storage, and communication characteristics of client devices, middle devices (systems), and servers in the datacenter, among other factors, Column 4, lines 24-32; see Figure 5; At 500, a request is received at a server from a client application of a client device for processing workload. At 502, resource availability information of the client device is received at the server to process the workload. At 504, components that include server components of the server and client components of the client application are partitioned based on the resource availability information of the client. (information representing hardware configuration or hardware systems required/preferred) At 506, the workload is processed using the components as partitioned, Column 10, lines 52-61; resources can be hardware and/or software capabilities, Column 2, lines 59-63)
and routing the software code instruction identified in the data message, for execution to the selected client terminal or selected server. (optimal placement of application components are computed, and sending a response message to the client to compute locally its assignment of application components and send the computed data for further processing to the datacenter, Column 3, lines 34-44; the workload is processed using the components as partitioned (routing), Column 10, lines 52-61)
With respect to Claim 2, all the limitations of Claim 1 have been addressed above; and Jain further discloses:
wherein the routed software code instruction is executed at a processor within the selected client terminal or selected server. (see Figure 9; example computing system that executes optimized partitioning which includes processing unit(s) 904, Column 13, lines 19-45)
With respect to Claim 3, all the limitations of Claim 1 have been addressed above; and Jain further discloses:
wherein the step of selecting one of the client terminal and the server for execution of the software code instruction is implemented at either the client terminal or the server. (optimization framework, running on a central controller machine on the middle server or in the datacenter, computes the optimal placement (selecting) of application components (execution of the software code instructions), Column 3, lines 32-37)
With respect to Claim 4, all the limitations of Claim 1 have been addressed above; and Jain further discloses:
wherein: the client terminal is configured to implement a first set of software code instructions associated with functionality of the instance of the cloud based software application; (dynamically splitting the computation in an application, that is, which parts run on a client and which parts run on servers in a datacenter, Abstract, lines 1-4)
the server is configured to implement a second set of software code instructions associated with functionality of the instance of the cloud based software application; (dynamically splitting the computation in an application, that is, which parts run on a client and which parts run on servers in a datacenter, Abstract, lines 1-4)
the second set of software code instructions is distinct from the first set of software code instructions; (splitting the computation implies that the parts that run on the client and the parts that run on the servers are distinct, Abstract, lines 1-4)
and the step of selecting one of the client terminal and the server for execution of the software code instruction comprises:
determining which of the first set of software code instructions and the second set of software code instructions includes the software code instruction identified by the received data message; (determining optimal placement of application components (assigning to either the client, middle server or a server hosted in the datacenter), Column 3, lines 34-37; on receiving a client request, the optimal placement is actuated, Column 3, lines 37-44)
and responsive to:
the first set of software code instructions including the software code instruction identified by the received data message, selecting the client terminal for execution of the software code instruction; (determining optimal placement of application components (assigning to either the client, middle server or a server hosted in the datacenter), Column 3, lines 34-37; sending a response message to the client to compute locally its assignment of application components (the execution code for corresponding components can be pushed to the client or the client can use cached executable code from past executions or get it installed locally a priori), Column 3, lines 37-44)
or the second set of software code instructions including the software code instruction identified by the received data message, selecting the server for execution of the software code instruction. (determining optimal placement of application components (assigning to either the client, middle server or a server hosted in the datacenter), Column 3, lines 34-37; workload is processed using the components as partitioned (i.e. on the client, middle server or datacenter), Column 10, lines 57-61)
With respect to Claim 5, all the limitations of Claim 1 have been addressed above; and Jain further discloses:
wherein one or more of receiving the data message, (a request is received at a server from a client application of a client device for processing workload (software code instruction for execution), Column 10, lines 54-55) selecting one of the client terminal and the server for execution of the software code instruction, (determining the optimal partitioning of the client components and server components in the cloud application to process the workload based on client resources and server resources, Column 5, lines 1-7) and routing the software code instruction to the selected client terminal or selected server, (sending the response to the client which defines which of the client components to run locally against the workload, Column 5, lines 4-7) are implemented by a processor implemented software instruction routing layer within the client terminal. (see Figure 9; example computing system that executes optimized partitioning which includes processing unit(s) 904, Column 13, lines 19-45)
With respect to Claim 6, all the limitations of Claim 1 have been addressed above; and Jain further discloses:
wherein one or more of receiving the data message, (a request is received at a server from a client application of a client device for processing workload (software code instruction for execution), Column 10, lines 54-55) selecting one of the client terminal and the server for execution of the software code instruction, (determining the optimal partitioning of the client components and server components in the cloud application to process the workload based on client resources and server resources, Column 5, lines 1-7) and routing the software code instruction to the selected client terminal or selected server, (workload is processed (routing) using the components as partitioned between the client and server components, Column 10, lines 53-61) are implemented by a processor implemented software instruction routing layer within the server. (determining optimal placement of application components (assigning to either the client, middle server or a server hosted in the datacenter), Column 3, lines 34-37; see Figure 9; example computing system that executes optimized partitioning which includes processing unit(s) 904, Column 13, lines 19-45)
With respect to Claim 7, Jain discloses:
at least one processor, (see Figure 9; processing unit(s) 904)
and at least one non-transitory computer readable memory comprising one or more instructions that, when executed by the at least one processor, (see Figure 9; memory subsystem 906)
cause the processor to:
initiate a network communication session between the client terminal and the server, (communications framework that is employed to facilitate communications between client(s) and server(s), Column 18, lines 17-30) for delivery of streaming data that is output from the instance of the cloud based software application, from the server to the client terminal; (dynamically adapting to a connected-device environment by adjusting which computation of the cloud application (delivery of streaming data) runs on the client device and which parts run on the middle (or intermediary) server and which parts run in the datacenter, Column 2, lines 34-41)
initiate execution of the instance of the cloud based software application at the server; (server hosting cloud applications in a datacenter, Column 3, lines 47-52; cloud application execution running on server and/or client side, Column 11, lines 28-34)
subsequent to initiating the network communication session and initiating execution of the instance of the cloud based software application at the server,
receive a data message identifying a software code instruction for execution, (The system 200 includes the request component 102 that receives the request 104 from a cloud application 202 (subsequent to initiating the network communication session and initiating execution of the instance of the cloud based software application at the server) to process workload 204 (software code instruction for execution), Column 10, lines 54-55) wherein said software code instruction is associated with functionality of the instance of the cloud based software application; (workload/computation (functionality of the instance of the cloud based software application) in a cloud application, Abstract, lines 1-4)
responsive to receiving the data message identifying the software code instruction for execution: (a request is received at a server from a client application of a client device for processing workload (software code instruction for execution), Column 10, lines 54-55)
select one of the client terminal and the server for execution of the software code instruction identified in the data message, (dynamically splitting the computation in an application between a client and servers in a datacenter, Abstract, lines 1-4; determining optimal placement of application components (assigning to either the client, middle server or a server hosted in the datacenter), Column 3, lines 34-37; The server application 108 can be one of multiple servers of a datacenter 118 to which the workload (computation) can be assigned and/or distributed, Column 4, lines 18-20) wherein the step of selecting one of the client terminal and the server is based on:
one or more rules for routing software code instructions to one or the other of the client terminal and the server for execution; (optimal partitioning can be based on energy consumption of the client, resource footprint of the client, network connectivity, and/or a service level agreement, computation costs of components running on the client device, memory footprint to run components on the client device, bandwidth needed based on the partitioning, power usage by the client device, end-to-end latency as a function of compute time and transmission latency, conservation of minimum battery life of the client and/or datacenter utilization where the server is sited, Column 5, lines 8-18)
and information representing hardware configurations or hardware systems required or preferred for execution of the software code instruction; (The optimal partitioning can be based on computation and storage costs of components running on the client device, memory footprint to run components on the client device, bandwidth needed based on the partitioning, power usage by the client device, end-to-end latency as a function of compute time and transmission latency, conservation of minimum battery life of client, and/or datacenter utilization where the server is sited, among other factors, Column 4, lines 50-59; The optimal partitioning can be based on energy consumption of the client device, resource footprint of the client device, data dependencies, network connectivity, and/or service level agreement, application characteristics, power or energy available at the client, size of the application objects, load in the datacenter, security and privacy concerns (e.g., cannot share all data on the client with the datacenter), computation, memory, storage, and communication characteristics of client devices, middle devices (systems), and servers in the datacenter, among other factors, Column 4, lines 24-32; see Figure 5; At 500, a request is received at a server from a client application of a client device for processing workload. At 502, resource availability information of the client device is received at the server to process the workload. At 504, components that include server components of the server and client components of the client application are partitioned based on the resource availability information of the client. (information representing hardware configuration or hardware systems required/preferred) At 506, the workload is processed using the components as partitioned, Column 10, lines 52-61; resources can be hardware and/or software capabilities, Column 2, lines 59-63)
and route the software code instruction identified in the data message, for execution to the selected client terminal or selected server. (optimal placement of application components are computed, and sending a response message to the client to compute locally its assignment of application components and send the computed data for further processing to the datacenter, Column 3, lines 34-44; the workload is processed using the components as partitioned (route), Column 10, lines 52-61)
With respect to Claim 8, all the limitations of Claim 7 have been addressed above; and Jain further discloses:
wherein the routed software code instruction is executed at a processor within the selected client terminal or selected server. (see Figure 9; example computing system that executes optimized partitioning which includes processing unit(s) 904, Column 13, lines 19-45)
With respect to Claim 9, all the limitations of Claim 7 have been addressed above; and Jain further discloses:
wherein the processor is located within either the client terminal or the server. (the computing system (client and/or server) implementing various aspects includes a computer having processing unit(s) 904, Column 13, lines 32-35)
With respect to Claim 10, all the limitations of Claim 7 have been addressed above; and Jain further discloses:
wherein:
the client terminal is configured to implement a first set of software code instructions associated with functionality of the instance of the cloud based software application; (dynamically splitting the computation in an application, that is, which parts run on a client and which parts run on servers in a datacenter, Abstract, lines 1-4)
the server is configured to implement a second set of software code instructions associated with functionality of the instance of the cloud based software application; (dynamically splitting the computation in an application, that is, which parts run on a client and which parts run on servers in a datacenter, Abstract, lines 1-4)
the second set of software code instructions is distinct from the first set of software code instructions; (splitting the computation implies that the parts that run on the client and the parts that run on the servers are distinct, Abstract, lines 1-4)
and the step of selecting one of the client terminal and the server for execution of the software code instruction comprises:
determining which of the first set of software code instructions and the second set of software code instructions includes the software code instruction identified by the received data message; (determining optimal placement of application components (assigning to either the client, middle server or a server hosted in the datacenter), Column 3, lines 34-37; on receiving a client request, the optimal placement is actuated, Column 3, lines 37-44)
and responsive to:
the first set of software code instructions including the software code instruction identified by the received data message, selecting the client terminal for execution of the software code instruction; (determining optimal placement of application components (assigning to either the client, middle server or a server hosted in the datacenter), Column 3, lines 34-37; sending a response message to the client to compute locally its assignment of application components (the execution code for corresponding components can be pushed to the client or the client can use cached executable code from past executions or get it installed locally a priori), Column 3, lines 37-44)
or the second set of software code instructions including the software code instruction identified by the received data message, selecting the server for execution of the software code instruction. (determining optimal placement of application components (assigning to either the client, middle server or a server hosted in the datacenter), Column 3, lines 34-37; workload is processed using the components as partitioned (i.e. on the client, middle server or datacenter), Column 10, lines 57-61)
With respect to Claim 11, Jain discloses:
initiating a network communication session between the client terminal and the server, (communications framework that is employed to facilitate communications between client(s) and server(s), Column 18, lines 17-30) for delivery of streaming data that is output from the instance of the cloud based software application, from the server to the client terminal; (dynamically adapting to a connected-device environment by adjusting which computation of the cloud application (delivery of streaming data) runs on the client device and which parts run on the middle (or intermediary) server and which parts run in the datacenter, Column 2, lines 34-41)
initiating execution of the instance of the cloud based software application at the server; (server hosting cloud applications in a datacenter, Column 3, lines 47-52; cloud application execution running on server and/or client side, Column 11, lines 28-34)
subsequent to initiating the network communication session and initiating execution of the instance of the cloud based software application at the server,
receiving a data message identifying a software code instruction for execution, (The system 200 includes the request component 102 that receives the request 104 from a cloud application 202 (subsequent to initiating the network communication session and initiating execution of the instance of the cloud based software application at the server) to process workload 204 (software code instruction for execution), Column 10, lines 54-55) wherein said software code instruction is associated with functionality of the instance of the cloud based software application; (workload/computation (functionality of the instance of the cloud based software application) in a cloud application, Abstract, lines 1-4)
responsive to receiving the data message identifying the software code instruction for execution: (a request is received at a server from a client application of a client device for processing workload (software code instruction for execution), Column 10, lines 54-55)
selecting one of the client terminal and the server for execution of the software code instruction identified in the data message, (dynamically splitting the computation in an application between a client and servers in a datacenter, Abstract, lines 1-4; determining optimal placement of application components (assigning to either the client, middle server or a server hosted in the datacenter), Column 3, lines 34-37; The server application 108 can be one of multiple servers of a datacenter 118 to which the workload (computation) can be assigned and/or distributed, Column 4, lines 18-20) wherein the step of selecting one of the client terminal and the server is based on:
one or more rules for routing software code instructions to one or the other of the client terminal and the server for execution; (optimal partitioning can be based on energy consumption of the client, resource footprint of the client, network connectivity, and/or a service level agreement, computation costs of components running on the client device, memory footprint to run components on the client device, bandwidth needed based on the partitioning, power usage by the client device, end-to-end latency as a function of compute time and transmission latency, conservation of minimum battery life of the client and/or datacenter utilization where the server is sited, Column 5, lines 8-18)
and information representing hardware configurations or hardware systems required or preferred for execution of the software code instruction; (The optimal partitioning can be based on computation and storage costs of components running on the client device, memory footprint to run components on the client device, bandwidth needed based on the partitioning, power usage by the client device, end-to-end latency as a function of compute time and transmission latency, conservation of minimum battery life of client, and/or datacenter utilization where the server is sited, among other factors, Column 4, lines 50-59; The optimal partitioning can be based on energy consumption of the client device, resource footprint of the client device, data dependencies, network connectivity, and/or service level agreement, application characteristics, power or energy available at the client, size of the application objects, load in the datacenter, security and privacy concerns (e.g., cannot share all data on the client with the datacenter), computation, memory, storage, and communication characteristics of client devices, middle devices (systems), and servers in the datacenter, among other factors, Column 4, lines 24-32; see Figure 5; At 500, a request is received at a server from a client application of a client device for processing workload. At 502, resource availability information of the client device is received at the server to process the workload. At 504, components that include server components of the server and client components of the client application are partitioned based on the resource availability information of the client. (information representing hardware configuration or hardware systems required/preferred) At 506, the workload is processed using the components as partitioned, Column 10, lines 52-61; resources can be hardware and/or software capabilities, Column 2, lines 59-63)
and routing the software code instruction identified in the data message, for execution to the selected client terminal or selected server. (optimal placement of application components are computed, and sending a response message to the client to compute locally its assignment of application components and send the computed data for further processing to the datacenter, Column 3, lines 34-44; the workload is processed using the components as partitioned (routing), Column 10, lines 52-61)
Response to Arguments
Applicant’s arguments, see Page 8, filed February 4, 2026, with respect to the objection to claim 7 and the §112(a) rejection of claims 1, 7 and 11 have been fully considered and are persuasive. The objection to claim 7 and the §112(a) rejection of claims 1, 7 and 11 have been withdrawn.
Applicant's arguments filed February 4, 2026 with respect to the §103 rejections have been fully considered but they are not persuasive.
In the Remarks, Applicant argues:
Please draw the Examiner's attention to the combination of the following features in claims 1, 7 and 11:
receiving a data message identifying a software code instruction for execution wherein said software code instruction is associated with functionality of the instance of the cloud based software application;
responsive to receiving the data message identifying the software code instruction for execution:
selecting one of the client terminal and the server for execution of the software code instruction identified in the data message,
From the above, it can be seen that the received data message identifies a software code instruction, and that one of the client terminal and the server is selected for execution of the software code instruction identified in the data message.
The effect of this limitation is that it ensures that individual software code instructions are identified by data messages, and a routing decision for that specific software code instruction is taken - resulting in a granular and targeted approach to selective routing.
It is critical to understand that the claimed subject matter does not involve dividing or splitting the software code instruction that is identified in the data message, nor does it involve selectively routing application components for execution to a server or client terminal. Instead the limitation requires that either one of the client terminal and the server are selected on a per data message / per software code instruction basis, for execution - and that the identified software code instruction is then routed for execution to that selected client terminal or server.
In concluding that the combination of Jain and Russell renders claims 1, 7 and 11 unpatentable under 35 U.S.C. § 103, the Examiner relies on the following disclosures of Jain:
Jain @ Col. 10, lines 54 to 55 - which teaches "a request is received at a server from a client application of a client device for processing workload" - the Examiner equates this disclosure with the claimed limitation of receiving a data message identifying a software code instruction for execution. In other words, the Examiner equates the processing workload of Jain with the software code instruction for execution as recited in claims 1, 7 and 11.
Jain @ Abstract, lines 1 - 4 - which teaches "dynamically splitting the computation in an application between a client and servers in a data center", and Col. 3, lines 34 to 37 - which teaches "determining optimal placement of application components (assigning to either the client, middle server or a server hosted in the data center)" - the Examiner equates these teachings with the claimed limitation of selecting one of the client terminal and the server for execution of the software code instruction identified in the data message.
The Applicant believes that the Examiner's conclusions are incorrect for the following reasons.
Since the Examiner relies on Jain's teaching that "a request is received at a server from a client application of a client device for processing workload" as being equivalent to receiving a data message identifying a software code instruction, the Examiner therefore equates the claimed limitation of a software code instruction identified within a data message with the processing workload taught by Jain.
Therefore, for Jain to be sufficient to teach the next limitation of claim 1, it would need to describe selecting either one of the client terminal or the server for executing the processing workload. However, Jain does not do this. To the contrary, Jain (see Abstract, lines 1 to 4) teaches that the processing workload is split between the client and the server.
Referring to the teachings of Jain at Col. 3, lines 34 to 37, the prior art reference at best teaches that application components can be assigned to one of a client terminal, a middle server, or a server hosted in the data center.
This cited portion of Jain describes partitioning the entire workload into components that are executable by the client terminal and components that are executable at the server. It does not teach or suggest that this partitioning happens each time a new instruction is identified within a received data message. It will be understood that the batch-based partitioning arrangement of Jain is therefore less efficient, as it involves a one-time consideration of device parameters and a consequent one-time partitioning. Thus, in the solution offered by Jain, if the device parameters subsequently change, the partitioning decisions taken based on the earlier detected device parameters may no longer be the most efficient option.
In summary, therefore, there is no disclosure anywhere in Jain that the application component that is being assigned is identified in a received data message - and that the system responds to receiving said data message by selectively assigning said application component to one of a client terminal or a server. At best, Jain could be understood to disclose splitting of workloads identified in a data message between a client terminal and a server - which is different from selectively routing the entire software code instruction that is identified in a data message to one of a client terminal and a server.
Examiner’s Response:
The Examiner respectfully disagrees. Applicant appears to argue that only one of the client terminal and the server is selected for execution of the software code instruction identified in the data message and that the claimed subject matter does not involve dividing or splitting the software code instruction that is identified in the data message (see Pages 10 and 11 of Remarks). However, the current claim language does not preclude the interpretation that both could be selected, as selecting both would include the selection of one of the client terminal and server. Even if the Applicant claims that only one of the client terminal and the server is selected, Jain appears to contemplate this scenario as well. The splitting of the computation in the application to all-client or all-server can depend on the application characteristics and/or network connectivity (see Abstract, lines 4-10). If the computation/workload has security or privacy concerns (i.e., the client does not want to share any data), then the computation/workload would be run exclusively on the client. Also, if there is no available power/energy on the client to perform the computation/workload, then the server would have to perform the computation/workload exclusively.
Further, Applicant argues that Jain “does not teach or suggest that this partitioning happens each time a new instruction is identified within a received data message.” However, it is the Examiner’s position that Jain discloses that the optimal partitioning happens on a per-request basis (see Column 4, lines 32-33). A per-request basis can be reasonably interpreted as “each time a new instruction is identified within a received data message”.
Further still, Applicant argues that “there is no disclosure anywhere in Jain that the application component that is being assigned is identified in a received data message - and that the system responds to receiving said data message by selectively assigning said application component to one of a client terminal or a server”. However, Jain discloses that the workload/computation (software code instruction) in the request (received data message) is assigned/distributed to either the client or server for processing (see Column 4, lines 18-20). Further, Jain discloses “incorporating estimated availability and utilization of servers in the cloud to offload computational load on the clients” (see Column 3, lines 55-58). This citation discloses that, based on estimated availability and utilization of servers, certain computational load (software code instructions) is offloaded (selectively assigned) to clients (client terminals). Further still, Jain discloses “adjusting which computation of the application runs on the client device, which parts run on the middle (or intermediary) server, and which parts run in the datacenter” (Column 2, lines 38-41). Therefore, these disclosures teach selectively running a computation on either a client, a server, or a datacenter.
Further still, Applicant argues that Jain does not disclose “selectively routing the entire software instruction that is identified in a data message to one of a client terminal and a server”. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., selectively routing the entire software instruction) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The current claim language does not necessitate the interpretation that the entire software code instruction is selectively routed to a client terminal or a server.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LANNY N UNG whose telephone number is (571)270-7708. The examiner can normally be reached Mon-Thurs 6am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LANNY N UNG/ Examiner, Art Unit 2197