Prosecution Insights
Last updated: April 19, 2026
Application No. 17/246,445

System and Method for Consumerizing Cloud Computing

Final Rejection §103
Filed: Apr 30, 2021
Examiner: KAMRAN, MEHRAN
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: Marvell Asia Pte. Ltd.
OA Round: 6 (Final)
Grant Probability: 90% (Favorable)
OA Rounds: 7-8
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% — above average (434 granted / 484 resolved; +34.7% vs TC avg)
Interview Lift: +14.3% — moderate lift for resolved cases with interview
Avg Prosecution: 2y 10m typical timeline (26 currently pending)
Total Applications: 510 across all art units (career history)

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 58.2% (+18.2% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 484 resolved cases
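The deltas above reduce to simple arithmetic on the examiner's resolved-case counts. As a sanity check, a minimal sketch; note the Tech Center averages here are back-derived from the stated "vs TC avg" deltas (an assumption), not independent data:

```python
# Sanity check of the dashboard arithmetic above.
# TC averages are inferred from the stated "vs TC avg" deltas (assumption).
granted, resolved = 434, 484

career_allow_rate = 100 * granted / resolved      # shown rounded as 90%
implied_tc_allow = career_allow_rate - 34.7       # implied TC-wide allow rate

# Statute-specific allowance rates and their deltas vs the TC estimate
rates = {"101": 8.8, "103": 58.2, "102": 9.9, "112": 13.2}
deltas = {"101": -31.2, "103": 18.2, "102": -30.1, "112": -26.8}
implied_tc = {s: round(rates[s] - deltas[s], 1) for s in rates}

print(round(career_allow_rate, 1))  # 89.7
print(round(implied_tc_allow, 1))   # 55.0
print(implied_tc["103"])            # 40.0
```

So the 90% career figure is 434/484 rounded, and each statute-level delta is consistent with a TC baseline in the 40-55% range.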

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office Action is in response to the amendment filed 12/11/2025. Claims 1-50 are pending in this application. Claims 1, 22 and 43-45 are independent claims. This Office Action is made final.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5, 22, 24, 26, 27, 43-45, 47 and 49 are rejected under 35 U.S.C. 103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of OHare (US 2018/0165131 A1).

As per claim 1, Khalid teaches a system for cloud computing, the system comprising: a cloud job manager configured to manage an end user request to perform a computational job via cloud computing, the end user request received from an end user device, the cloud job manager further configured to manage the end user request by: (Khalid [0013] FIG. 1 illustrates an exemplary edge compute system 100 (“system 100”) [cloud job manager] configured in accordance with methods and systems described herein. As shown, system 100 may include, without limitation, a cloud server 102, an edge computing device 104, and a client device 106 communicatively coupled one to another by way of a network 108.
Cloud server 102 and edge computing device 104 may operate in tandem to provide a service 110 over network 108 for access by a client 112 implemented by client device 106. [0047] As mentioned, in certain implementations, system 100 may be configured to determine at runtime (e.g., in response to a request that the component or task be performed) how a component or a task will be processed. Such a determination may be made in view of a designation of the component or task or regardless of a designation of the component or task. An element of system 100 may be configured to make the determination based on predefined factors that are designed to facilitate the component or task being assigned to the best available resource for processing (e.g., in order to facilitate optimization of service 110 and/or client 112, promote efficient use of resources, and/or provide a quality and/or improved user experience with service 110 or client 112). The element of system 100 may be configured to receive and analyze information about the predefined factors, and to determine how the component or task will be processed).

Khalid does not teach (i) selecting, responsive to receipt of the end user request, a consumer device among a plurality of consumer devices to perform at least a portion of the computational job, the consumer device selected, at least in part, based on at least one characteristic of the consumer device and proximity of the consumer device to the end user device that transmitted the end user request to perform the computational job via cloud computing and (ii) assigning the at least a portion of the computational job to the consumer device selected by transmitting a job request to perform the at least a portion of the computational job, the job request transmitted to the consumer device selected.
However, OHare teaches (i) selecting, responsive to receipt of the end user request, a consumer device among a plurality of consumer devices to perform at least a portion of the computational job, the consumer device selected, at least in part, based on at least one characteristic of the consumer device and proximity of the consumer device to the end user device that transmitted the end user request to perform the computational job via cloud computing (OHare [0006] FIG. 3 is a drawing of a user device presenting selectable options for offloading a computing task to other devices determined to be within range. [0016] FIG. 1 is a drawing of mobile device 102 offloading a computing task 104 to an edge device, such as an IoT device 106. The operating system 108 in the mobile device 102 may be enabled with a framework, as disclosed herein, termed offload computing protocol (OCP). The OCP may outsource 110 the computing task 104 from more constrained devices such as wearables, smart phones, tablets, and others, to proximate devices, such as the IoT device 106, that are less constrained relative to the offloading device. The IoT device 106 may execute the computing task 104 and return 112 the results 114 to the mobile device 102. [0023] If the personal smart phone 202 is equipped with an OCP enabled mobile operating system, as described with respect to FIG. 1, it may offload some of the computing load to nearby devices. The offloading may depend upon the computing capability of the nearby devices as well as the amount of power needed to transfer the data and code to the nearby devices. For example, the personal smart phone 202 may be able to use the computing capability in the nearby IoT device 210, transferring data and code over a Bluetooth communication 204. [0035] The radio transceivers 430 may send out a radio message 432 to determine if there are devices present in the vicinity of the mobile device 102.
The radio message 432 may include queries to determine if an OCP enabled IoT device 106 is present. The OCP enabled IoT device 106 may send an acknowledgment 434 of its presence back to the radio transceivers 430. Multiple OCP enabled IoT devices may be located and enumerated by this technique. The radio transceivers 430 may then return a message 436 to the proximity sensors 426 indicating that OCP devices are present. Similarly, the proximity sensors 426 may then return a message 438 to the OCP stack 422 indicating that OCP devices are present.)

In the combination, the request is handled by the art of Khalid and, in response to that request, the selection happens in OHare. An alternative to that scenario is also presented below. The claim language itself says responsive to receipt of the end user request (the request handling is already taught by Khalid). The examiner presented an alternative rejection of the above limitation in the context of the primary art of Khalid in the previous office action (the only difference being that Khalid teaches the above limitation for one consumer device). That alternative is repeated here, with further clarification, and needs to be considered; because it repeats what was in the last office action, it does not constitute new grounds of rejection.

(i) selecting, responsive to receipt of the end user request, a consumer device to perform at least a portion of the computational job, the consumer device selected, at least in part, based on at least one characteristic of the consumer device and proximity of the consumer device to the end user device that transmitted the end user request to perform the computational job via cloud computing (Khalid [0011] The improved speed may allow the client device to offload computing tasks that are latency sensitive and/or computationally intensive in a manner that may facilitate optimized performance of the application and/or improved user experience with the application.
Examples of such optimizations are described herein. [0024] Edge computing device 104 may include one or more servers (e.g., one or more servers configured in blade server format) and/or other computing devices physically located at an edge of network 108, nearer, in terms of latency, to client device 106 than cloud server 102 is to client device 106. Edge computing device 104 may be configured to provide service 110, or one or more components or tasks of service 110, for access over network 108. With edge computing device 104 located at an edge of network 108, data communication latencies between edge computing device 104 and client device 106 generally will be lower than data communication latencies between cloud server 102 and client device 106, which may allow edge computing device 104 to provide service 110, or one or more components or tasks of service 110, to client 112 more quickly than cloud server 102 may do so. In certain examples, edge computing device 104 may be configured to provide a latency-sensitive component or task of service 110 to client device 106 to leverage the lower latency for data communications between edge computing device 104 and client device 106. [0025] The edge of network 108, as used herein, may include any location within network 108 that is nearer, in terms of latency, to client device 106 than cloud server 102 is to client device 106. In certain examples, the edge of network 108 may include a location at a base station or central office of a mobile wireless network at which edge computing device 104 (e.g., a mobile edge computing device) is implemented. Edge computing device 104 may be nearer than cloud server 102 to client device 106, in terms of latency, for one or more reasons.
For instance, edge computing device 104 may be nearer to client device 106 in terms of geographical distance and/or network hops, which may cause latency of data communications between edge computing device 104 and client device 106 to be lower than latency of data communications between cloud server 102 and client device 106. For example, in certain implementations, edge computing device 104 may be one hop or two hops away from client device 106, while cloud server 102 is a greater number of hops away from client device 106 such that latency of data communications between edge computing device 104 and client device 106 is lower than latency of data communications between cloud server 102 and client device 106.)

(ii) assigning the at least a portion of the computational job to the consumer device selected by transmitting a job request to perform the at least a portion of the computational job, the job request transmitted to the consumer device selected. (OHare [claim 12] locating an offload computing protocol (OCP) capable device; determining a route to the OCP capable device; negotiating a session on the OCP capable device; sending code and data to the OCP capable device for processing; and receiving a result back from the OCP capable device, and [0065] Code 612 may be included to direct the processor 602 to negotiate a session on the least expensive computing device and route. The session may include the remote device performing a computational task for the local device, for example, using code and data transferred from the local device. Code 614 may be included to direct the processor 602 to send the code and data to the remote device for processing. Code 616 may be included to direct the processor 602 to receive the results of the processing from the remote device. Code 618 may be included to direct the processor 602 to present the results of the calculation to the user. [0087] Example 22 includes the subject matter of any of examples 16 to 21.
In example 22, sending the code and data comprises creating an OCP bundle, and sending the OCP bundle to the OCP capable device. The OCP bundle comprises a return IP address for the result, the code, the data, and a return type for the result.)

As pointed out in the previous office action, Khalid also teaches assignment of tasks (see paragraphs 10 and 51). This needs to be considered as an alternative rejection of the above limitation.

It would have been obvious to a person having ordinary skill in the art before the filing date of the claimed invention to combine OHare with the system of Khalid to select and assign to an edge device. One having ordinary skill in the art would have been motivated to incorporate OHare into the system of Khalid for the purpose of outsourcing tasks from more constrained devices to a proximate device (OHare paragraph 16).

As per claim 3, Khalid teaches wherein the consumer device selected is a smart phone, tablet, laptop computer, desktop computer, or other portable or non-portable programmable consumer electronic device capable of computation as well as receiving and sending data via a network. (Khalid [0044] In certain implementations, components and/or tasks of client 112 or service 110 may be designated to be processed in certain ways within system 100. For example, a component or a task may be designated to be processed by client 112 at client device 106, by speed layer 116, or by batch layer 114. Client 112, speed layer 116, and batch layer 114 may be configured to identify such a designation and process the component or task accordingly. If the component or task is designated to be processed by client 112, client 112 may direct execution of the component or task to be performed at client device 106. If the component or task is designated to be processed by speed layer 116, client 112 may request execution of the component or task by speed layer 116, such as by sending requests to edge computing device 104 over network 108.)
As per claim 5, Khalid teaches wherein the at least one characteristic includes device health information, device capability information, or a combination thereof. (Khalid [0047] statuses of resources at speed layer 116, latencies of processing and/or internal data communications within client device 106 (e.g., latency to send a task to a GPU of client device 106), [0064] In certain implementations, speed layer 116 may provide distributed resources (e.g., GPUs) at strategic locations, which may provide a shared, resource accelerated (e.g., a GPU accelerated) compute platform that can be provided as a service, such as a software as a service (SaaS), an infrastructure as a service (IaaS), and/or a platform as a service (PaaS) model. To this end, speed layer 116 may be implemented to include scripts (e.g., Python script) with engine logic for performing one or more of the speed layer operations described herein. Speed layer 116 may include or access one or more resources such as GPUs located at the edge of network 108. In certain examples, speed layer 116 may implement a software layer (e.g., CUDA provided by NVIDIA Corporation) that provides direct access to a GPU's virtual instruction set and parallel computing elements for execution of compute kernels.)

As to claims 22 and 43-45, they are rejected based on the same reason as claim 1. As to claims 24 and 47, they are rejected based on the same reason as claim 3. As to claims 26 and 49, they are rejected based on the same reason as claim 5.

Claims 2, 23 and 46 are rejected under 35 U.S.C. 103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of OHare (US 2018/0165131 A1) in further view of Nookula (US 2019/0220783 A1).
As per claim 2, Khalid teaches wherein the consumer device selected meets respective criterion for the at least one characteristic and is geographically located closest to the end user device relative to any other consumer device of the plurality of consumer devices meeting the respective criterion. (Khalid [0024] Edge computing device 104 may include one or more servers (e.g., one or more servers configured in blade server format) and/or other computing devices physically located at an edge of network 108, nearer, in terms of latency, to client device 106 than cloud server 102 is to client device 106).

Khalid and OHare do not teach wherein the consumer device is selected from among a plurality of consumer devices under lease agreement for use by the system. However, Nookula teaches wherein the consumer device is selected from among a plurality of consumer devices under lease agreement for use by the system (Nookula [0062] Referring to FIG. 6, at least some networks in which embodiments may be implemented may include hardware virtualization technology that enables multiple operating systems to run concurrently on a host computer (e.g., hosts 620A and 620B of FIG. 6), i.e. as virtual machines (VMs) 624 on the hosts 620. The VMs 624 may, for example, be executed in slots on the hosts 620 that are rented or leased to customers of a network provider.)

It would have been obvious to a person having ordinary skill in the art before the filing date of the claimed invention to combine Nookula with the system of Khalid and OHare to rent or lease devices. One having ordinary skill in the art would have been motivated to incorporate Nookula into the system of Khalid and OHare for the purpose of locating an edge device for deployment (Nookula paragraph 15).

As to claims 23 and 46, they are rejected based on the same reason as claim 2.

Claims 4, 25 and 48 are rejected under 35 U.S.C.
103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of OHare (US 2018/0165131 A1) in further view of Merli (US 11,762,442 B1).

As per claim 4, Khalid and OHare do not teach wherein the system is a cloud service provider system of a cloud service provider and wherein the consumer device selected is a low-power computing device lower in power usage relative to other computing devices available to the cloud service provider for use as a resource for cloud computing. However, Merli teaches wherein the system is a cloud service provider system of a cloud service provider and wherein the consumer device selected is a low-power computing device lower in power usage relative to other computing devices available to the cloud service provider for use as a resource for cloud computing. (Merli [col 69, lines 24-32] In various implementations, the message broker 2046 may input one or more data sets 2212 into ML model 2050. The ML model 2050 may be associated with an existing library of functions and/or other techniques to process data. In some implementations, such as when hub device 1804 is a low-power device and/or operating with limited processing or memory resources, the ML model 2050 may be a streaming ML model that performs lightweight computations and/or may be trained with incoming streaming data.)

It would have been obvious to a person having ordinary skill in the art before the filing date of the claimed invention to combine Merli with the system of Khalid and OHare to use a low power computing device. One having ordinary skill in the art would have been motivated to incorporate Merli into the system of Khalid and OHare for the purpose of performing lightweight computation (Merli col 69, lines 30-32).

As to claims 25 and 48, they are rejected based on the same reason as claim 4.

Claims 6, 27 and 50 are rejected under 35 U.S.C.
103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of OHare (US 2018/0165131 A1) in further view of Zhao (US 2016/0066278 A1) and Holmes (US 2020/0384369 A1).

As per claim 6, Khalid and OHare do not teach wherein the device health information includes resource utilization, battery power available, battery health, processor load, user application prioritization, data network speed, virtual machine (VM) availability, type of network connectivity, load sharing, device availability information derived from a user profile associated with the consumer device, or a combination thereof, and wherein the device capability information includes processor capability, storage capability, estimate for battery drain over time, other capability information derived from a make and model of the consumer device, or a combination thereof.

However, Zhao teaches wherein the device health information includes resource utilization, battery power available, battery health, processor load, user application prioritization, data network speed, virtual machine (VM) availability, type of network connectivity, load sharing, device availability information derived from a user profile associated with the consumer device, or a combination thereof (Zhao [0030] It will be appreciated by one of skill in the art that meeting/battery usage table 400 may have been compiled by a distributor of battery manager 150. In practice, such a table may be summarized prior to storage in battery usage database 180 in order to increase the efficiency of step 320. For example, the installation process of battery manager 150 and/or battery usage database 180 may include designation of the manufacturer and model of mobile computing device 100, such that it may be sufficient to provide battery usage data for the specific model of device 100, i.e. for an iPhone 5 as per the example hereinabove.
It will also be appreciated that the format and usage of meeting/battery usage table 400 is exemplary; the present invention may support any suitable format and/or method for documenting actual battery usage and the use thereof for estimating battery requirements for future scheduled tasks.)

It would have been obvious to a person having ordinary skill in the art before the filing date of the claimed invention to combine Zhao with the system of Khalid and OHare to check battery health. One having ordinary skill in the art would have been motivated to incorporate Zhao into the system of Khalid and OHare for the purpose of monitoring battery consumption for scheduled tasks/events (Zhao paragraph 01).

Khalid, OHare and Zhao do not teach wherein the device capability information includes processor capability, storage capability, estimate for battery drain over time, other capability information derived from a make and model of the consumer device, or a combination thereof. However, Holmes teaches wherein the device capability information includes processor capability, storage capability, estimate for battery drain over time, other capability information derived from a make and model of the consumer device, or a combination thereof. (Holmes [0014] where the client device(s) has enough storage capacity and/or processing capabilities, the generation and storage of highlights of the game stream may be executed by the client device(s)—thus offloading the storage and processing requirements from the streaming device(s) to the client device(s) to allow the streaming device(s) to maintain a high image quality.)

It would have been obvious to a person having ordinary skill in the art before the filing date of the claimed invention to combine Holmes with the system of Khalid, OHare and Zhao to check the capability of a device.
One having ordinary skill in the art would have been motivated to incorporate Holmes into the system of Khalid, OHare and Zhao for the purpose of allocation of compute resources in cloud gaming systems (Holmes paragraph 03).

As to claims 27 and 50, they are rejected based on the same reason as claim 6.

Claims 7 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of OHare (US 2018/0165131 A1) in further view of Youn (US 2010/0153960 A1).

As per claim 7, Khalid and OHare do not teach wherein the cloud job manager is further configured to compute a profit estimate effected by use of the consumer device for performing the at least a portion of the computational job and select the consumer device based on the profit estimate computed. However, Youn teaches wherein the cloud job manager is further configured to compute a profit estimate effected by use of the consumer device for performing the at least a portion of the computational job and select the consumer device based on the profit estimate computed.
(Youn [0036] Preferably, the task processing policy decision unit includes an expected completion time calculation unit for calculating for each resource in the grid, based on the resource state information, an expected completion time of the task; an expected profit calculation unit for calculating for each resource in the grid, based on the resource state information, an expected profit to be obtained by completing the task; an available resource cluster creation unit for creating an available resource cluster by using the expected execution time and the expected profit; and a task processing policy creation unit for determining, if the SLA information is satisfied by the available resource cluster, a task processing policy for executing the task by using at least one resource in the available resource cluster, wherein the available resource cluster is a set of resources having the expected completion time within the deadline and the expected profit being positive.)

It would have been obvious to a person having ordinary skill in the art before the filing date of the claimed invention to combine Youn with the system of Khalid and OHare to estimate the profit and compensation. One having ordinary skill in the art would have been motivated to incorporate Youn into the system of Khalid and OHare for the purpose of implementing resource management in grid computing systems (Youn paragraph 01).

As to claim 28, it is rejected based on the same reason as claim 7.

Claims 8 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of OHare (US 2018/0165131 A1) in further view of Tanaka (US 5,790,862).

As per claim 8, Khalid and OHare do not teach compute a compensation amount to be paid for use of the consumer device, the compensation amount computed based on the profit estimate determined; and select the consumer device based on the compensation amount computed.
However, Tanaka teaches compute a compensation amount to be paid for use of the consumer device, the compensation amount computed based on the profit estimate determined; and select the consumer device based on the compensation amount computed. (Tanaka [col 20, lines 23-25] FIGS. 11A, 11B show the constructions of the estimated profit calculator 36 and the resource classified cost calculator 39; [col 31, lines 53-63] As can be seen from FIG. 9, the assigning resource element determination unit 26 is made up of an assigning candidate resource element storage unit 31, an assignable resource element detection unit 32, a use cost calculator 34, a use cost storage unit 35, an estimated profit calculator 36, a profit storage unit 37, a resource element determination control unit 38, a resource classified cost calculator 39, a resource classified cost storage unit 40 and an evaluated resource storage unit 41. Here, FIG. 13 is a flowchart for the operation of the resource element determination control unit 38. [col 34, lines 25-28] The resource classified cost calculator 39 calculates the lower order cost value in step b12 when there are a plurality of resources whose profit value calculated by the estimated profit calculator 36 is the highest.)

It would have been obvious to a person having ordinary skill in the art before the filing date of the claimed invention to combine Tanaka with the system of Khalid and OHare to select the device based on compensation amount. One having ordinary skill in the art would have been motivated to incorporate Tanaka into the system of Khalid and OHare for the purpose of calculating a profit for a resource (Tanaka col 34, lines 25-28).

As to claim 29, it is rejected based on the same reason as claim 8.

Claims 9 and 30 are rejected under 35 U.S.C.
103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of OHare (US 2018/0165131 A1) in further view of He (US 2015/0205888 A1).

As per claim 9, Khalid and OHare do not teach wherein the cloud job manager is coupled to the consumer device via a communications channel and wherein, to assign the at least a portion of the computational job to the consumer device selected, the cloud job manager is further configured to communicate, over the communications channel, with a client job manager of the consumer device selected, the client job manager configured to spawn at least one processing task on the consumer device selected, the at least one processing task configured to perform the at least a portion of the computational job. However, He teaches wherein the cloud job manager is coupled to the consumer device via a communications channel and wherein, to assign the at least a portion of the computational job to the consumer device selected, the cloud job manager is further configured to communicate, over the communications channel, with a client job manager of the consumer device selected, the client job manager configured to spawn at least one processing task on the consumer device selected, the at least one processing task configured to perform the at least a portion of the computational job. (He [0023] In some embodiments, for example, a global resource manager may be used to maintain the resource requirements for a job, and may query a local resource manager on each virtual node to determine if there are sufficient resources to run the job. The global resource manager may then build a global network resource table and start a master parallel job manager, which will in turn start a local parallel job manager on each virtual node, and these local managers may spawn the parallel tasks.)
It would have been obvious to a person having ordinary skill in the art before the filing date of the claimed invention to combine He with the system of Khalid and OHare to use a communication channel. One having ordinary skill in the art would have been motivated to incorporate He into the system of Khalid and OHare for the purpose of implementing high performance computing (HPC) application environments (He paragraph 01).

As to claim 30, it is rejected based on the same reason as claim 9.

Claims 10 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of OHare (US 2018/0165131 A1) in further view of Hatabe (US 2013/0219406 A1).

As per claim 10, Khalid and OHare do not teach assign a job identifier (ID) to the computational job; assign a sub-job ID to the at least a portion of the computational job, the sub-job ID associated with the job ID; associate the sub-job ID with a device ID associated with the consumer device selected; track progress of the computational job and associate an indicator of the progress tracked with the job ID. However, Hatabe teaches assign a job identifier (ID) to the computational job; assign a sub-job ID to the at least a portion of the computational job, the sub-job ID associated with the job ID; associate the sub-job ID with a device ID associated with the consumer device selected; track progress of the computational job and associate an indicator of the progress tracked with the job ID. (Hatabe [0069] The divided data management table 120 includes a divided data ID 1201, a job ID 1202, a sub-job ID 1203, an execution server ID 1204, and a state 1205, as constituent items. [0072] The sub-job ID 1203 is information indicating an identifier of a corresponding sub-job. [0099] In addition, the job scheduling process program 1000 registers necessary information in the divided data management table (divided data management information) 120 (step S911).
In other words, the job scheduling process program 1000 enters a divided data ID 1201, a corresponding job ID 1202, a corresponding sub-job ID 1203 of the job net information management table (job net information) 100, and an execution server ID which executes each sub-job [association between sub-job ID and the device/server], in the divided data management table 120. [0127] If the job execution process (step S905 or step S912) finishes [tracking the progress], the job scheduling process program 1000 calls the data editing/dividing/display process program 1100 and displays the master data and a job result on a screen of a display device (not shown) (step S1201). For example, the data editing/dividing/display process program 1100 refers to the divided data management table 120 and the master data management table 140, and extracts therefrom the divided data ID 1401, the division key 1402, the update history 1403, the information 1404 indicating whether or not master data has been updated, the sub-job ID 1203, the execution server ID 1204, and the state 1205, which are displayed in a table form (refer to FIG. 13A).) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Hatabe with the system of Khalid and O'Hare to use a job identifier. One having ordinary skill in the art would have been motivated to incorporate Hatabe into the system of Khalid and O'Hare for the purpose of controlling a job network. (Hatabe paragraph 01) As to claim 31, it is rejected based on the same reason as claim 10. Claims 11 and 32 are rejected under 35 U.S.C.
103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of O'Hare (US 2018/0165131 A1) in further view of Hatabe (US 2013/0219406 A1) and Hosouchi (US 2012/0210323 A1). As per claim 11, Khalid and Hatabe do not teach comprising a data handler and data storage, the data handler coupled to the data storage and further coupled to the cloud job manager, the data handler configured to fetch data, corresponding to the sub-job ID, from the data storage and forward the data fetched to the cloud job manager, the cloud job manager further configured to transmit the data fetched to the consumer device selected. However, Hosouchi teaches comprising a data handler and data storage, the data handler coupled to the data storage and further coupled to the cloud job manager, the data handler configured to fetch data, corresponding to the sub-job ID, from the data storage and forward the data fetched to the cloud job manager, the cloud job manager further configured to transmit the data fetched to the consumer device selected.
(Hosouchi [claim 1] a plurality of jobs which belong to a job net of the same system stored in the storage device and process the same data; a means for assigning data IDs for uniquely identifying pieces of data into which the data is split to associate the data IDs with the pieces of data, and for storing the data IDs in the storage device as job net information; and a means for sending a request to execute a sub-job together with a data ID of one of the pieces of data to a second one of the computers, the data which a first job of the plurality of jobs executes being replaced with the pieces of data, wherein the second computer includes: a means for receiving a termination state and the data ID of the sent sub-job, and wherein the first computer further includes: a means for memorizing, in the storage device, split data management information storing the data ID, the termination state, and a job identifier for uniquely identifying the first job corresponding to the sub-job within the job net, which are associated with each other; and a means for sending a request to execute a sub-job together with the data ID of one of the pieces of data to the second computer). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Hosouchi with the system of Khalid, O'Hare, and Hatabe to fetch data from storage. One having ordinary skill in the art would have been motivated to incorporate Hosouchi into the system of Khalid, O'Hare, and Hatabe for the purpose of scheduling jobs of processing data. (Hosouchi paragraph 01) As to claim 32, it is rejected based on the same reason as claim 11. Claims 13 and 34 are rejected under 35 U.S.C.
103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of O'Hare (US 2018/0165131 A1) in further view of Butler (US 2022/0197773 A1). As per claim 13, Khalid and O'Hare do not teach wherein the cloud job manager is further configured to: monitor the proximity of the consumer device selected to the end user device; monitor health of the consumer device selected; and determine whether to offload the at least a portion of the computational job from the consumer device selected to another consumer device based on the proximity and health monitored. However, Butler teaches monitor the proximity of the consumer device selected to the end user device; (Butler [0421] In embodiments, the edge compute nodes 3036 may include or be part of an edge system 3035 (or edge network 3035). The edge compute nodes 3036 may also be referred to as “edge hosts 3036” or “edge servers 3036.” The edge system 3035 includes a collection of edge servers 3036 (e.g., MEC hosts/servers 3036-1 and 3036-2 of FIG. 31) and edge management systems (not shown by FIG. 30) necessary to run edge computing applications (e.g., MEC Apps 3136 of FIG. 31) within an operator network or a subset of an operator network. The edge servers 3036 are physical computer systems that may include an edge platform (e.g., MEC platform 3137 of FIG. 31) and/or virtualization infrastructure (e.g., VI 3138 of FIG. 31), and provide compute, storage, and network resources to edge computing applications. Each of the edge servers 3036 are disposed at an edge of a corresponding access network, and are arranged to provide computing resources and/or various services (e.g., computational task and/or workload offloading, cloud-computing capabilities, IT services, and other like resources and/or services as discussed herein) in relatively close proximity to intermediate nodes 3020 and/or endpoints 3010.
The VI of the edge servers 3036 provide virtualized environments and virtualized resources for the edge hosts, and the edge computing applications may run as VMs and/or application containers on top of the VI. One example implementation of the edge system 3035 is a MEC system 3035, which is discussed in more detail infra with respect to FIG. 31) monitor health of the consumer device selected; and determine whether to offload the at least a portion of the computational job from the consumer device selected to another consumer device based on the proximity and health monitored. (Butler [0112] In various embodiments, for example, the edge node may detect a resource overload if the receive buffer is full, or if the receive buffer otherwise exceeds a memory utilization threshold (e.g., the percentage of the receive buffer's overall capacity that is currently being used exceeds a threshold). Alternatively, any other metric may also be used to detect when the edge node's resources have become overloaded. [0191] In some embodiments, for example, the infrastructure capacity plan is generated based on an infrastructure state graph. The infrastructure state graph identifies possible states of the computing infrastructure that could occur based on possible resource capacity allocation actions that could be performed over the various time slots of the relevant time window. For example, the infrastructure state graph may include nodes corresponding to the possible states of the computing infrastructure over the respective time slots, and edges corresponding to the possible resource capacity allocation actions that could be performed to transition among the possible states. In particular, each state (or node) may identify the current resource capacities and service placements on the computing infrastructure at a particular time slot based on the capacity planning action(s) (or edges) that have been performed. 
For example, each state may identify the used and available capacity on each resource, the requested capacity for each service, the current mappings of services to resources, and so forth). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Butler with the system of Khalid and O'Hare to monitor the health and proximity of the device. One having ordinary skill in the art would have been motivated to incorporate Butler into the system of Khalid and O'Hare for the purpose of implementing automated resource management for distributed computing infrastructure. (Butler paragraph 02) As to claim 34, it is rejected based on the same reason as claim 13. Claims 14 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of O'Hare (US 2018/0165131 A1) in further view of Butler (US 2022/0197773 A1) and Kattepur (US 2023/0171154 A1). As per claim 14, Khalid and Butler do not teach wherein, to monitor the proximity and health of the consumer device selected, the cloud job manager is further configured to communicate over a communications channel with a client job manager of the consumer device selected. However, Kattepur teaches wherein, to monitor the proximity and health of the consumer device selected, the cloud job manager is further configured to communicate over a communications channel with a client job manager of the consumer device selected. (Kattepur [0015] Since the coordination of offloading task/processing among computing devices such as mobile robots, edge/fog devices and the cloud is a complex problem, it is herein suggested an automated planning and scheduling technique. This may e.g.
involve specifying a number of domains of interest including computing devices, communication channels, computation entities and time constraints.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Kattepur with the system of Khalid, O'Hare, and Butler to communicate via a channel. One having ordinary skill in the art would have been motivated to incorporate Kattepur into the system of Khalid, O'Hare, and Butler for the purpose of handling operations in a communications network. (Kattepur paragraph 01) As to claim 35, it is rejected based on the same reason as claim 14. Claims 15 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of O'Hare (US 2018/0165131 A1) in further view of Butler (US 2022/0197773 A1) and Zhao (US 2016/0066278 A1). As per claim 15, Khalid and Butler do not teach wherein the health of the consumer device includes battery health, processor load, user application prioritization, data network speed, availability of a virtual machine, Wireless Fidelity (Wi-Fi) or data network status, or a combination thereof. However, Zhao teaches wherein the health of the consumer device includes battery health, processor load, user application prioritization, data network speed, availability of a virtual machine, Wireless Fidelity (Wi-Fi) or data network status, or a combination thereof. (Zhao [0030] It will be appreciated by one of skill in the art that meeting/battery usage table 400 may have been compiled by a distributor of battery manager 150. In practice, such a table may be summarized prior to storage in battery usage database 180 in order to increase the efficiency of step 320.
For example, the installation process of battery manager 150 and/or battery usage database 180 may include designation of the manufacturer and model of mobile computing device 100, such that it may be sufficient to provide battery usage data for the specific model of device 100, i.e. for an iPhone 5 as per the example hereinabove. It will also be appreciated that the format and usage of meeting/battery usage table 400 is exemplary; the present invention may support any suitable format and/or method for documenting actual battery usage and the use thereof for estimating battery requirements for future scheduled tasks.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Zhao with the system of Khalid, O'Hare, and Butler to use a battery health status. One having ordinary skill in the art would have been motivated to incorporate Zhao into the system of Khalid, O'Hare, and Butler for the purpose of monitoring battery consumption for scheduled tasks/events (Zhao paragraph 01). As to claim 36, it is rejected based on the same reason as claim 15. Claims 16 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of O'Hare (US 2018/0165131 A1) in further view of Youn (US 2010/0153960 A1). As per claim 16, Khalid and O'Hare do not teach wherein the cloud job manager is further configured to determine an amount of time for completing the at least a portion of the computational job and to select the consumer device based on a determination that the consumer device is capable of completing the at least a portion of the computational job in the time determined.
However, Youn teaches wherein the cloud job manager is further configured to determine an amount of time for completing the at least a portion of the computational job and to select the consumer device based on a determination that the consumer device is capable of completing the at least a portion of the computational job in the time determined. (Youn [0036] Preferably, the task processing policy decision unit includes an expected completion time calculation unit for calculating for each resource in the grid, based on the resource state information, an expected completion time of the task; an expected profit calculation unit for calculating for each resource in the grid, based on the resource state information, an expected profit to be obtained by completing the task; an available resource cluster creation unit for creating an available resource cluster by using the expected execution time and the expected profit; and a task processing policy creation unit for determining, if the SLA information is satisfied by the available resource cluster, a task processing policy for executing the task by using at least one resource in the available resource cluster, wherein the available resource cluster is a set of resources having the expected completion time within the deadline and the expected profit being positive). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Youn with the system of Khalid and O'Hare to determine that the consumer device is capable of handling a portion of a job. One having ordinary skill in the art would have been motivated to incorporate Youn into the system of Khalid and O'Hare for the purpose of resource management in grid computing systems (Youn paragraph 01). As to claim 37, it is rejected based on the same reason as claim 16. Claims 17, 18, 38, and 39 are rejected under 35 U.S.C.
103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of O'Hare (US 2018/0165131 A1) in further view of Youn (US 2010/0153960 A1) and Zhao (US 2016/0066278 A1). As per claim 17, Khalid and Youn do not teach wherein the determination is based on movement of the consumer device, likelihood of call drops of a communications channel for communicating with the consumer device, availability of battery power of the consumer device, an estimate of time for the consumer device to complete the at least a portion of the computational job, an estimate of battery usage by the consumer device to complete the at least a portion of the computational job, or a combination thereof. However, Zhao teaches wherein the determination is based on movement of the consumer device, likelihood of call drops of a communications channel for communicating with the consumer device, availability of battery power of the consumer device, an estimate of time for the consumer device to complete the at least a portion of the computational job, an estimate of battery usage by the consumer device to complete the at least a portion of the computational job, or a combination thereof. (Zhao [0030] It will be appreciated by one of skill in the art that meeting/battery usage table 400 may have been compiled by a distributor of battery manager 150. In practice, such a table may be summarized prior to storage in battery usage database 180 in order to increase the efficiency of step 320. For example, the installation process of battery manager 150 and/or battery usage database 180 may include designation of the manufacturer and model of mobile computing device 100, such that it may be sufficient to provide battery usage data for the specific model of device 100, i.e. for an iPhone 5 as per the example hereinabove.
It will also be appreciated that the format and usage of meeting/battery usage table 400 is exemplary; the present invention may support any suitable format and/or method for documenting actual battery usage and the use thereof for estimating battery requirements for future scheduled tasks.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Zhao with the system of Khalid, O'Hare, and Youn to check battery health. One having ordinary skill in the art would have been motivated to incorporate Zhao into the system of Khalid, O'Hare, and Youn for the purpose of monitoring battery consumption for scheduled tasks/events (Zhao paragraph 01). As per claim 18, Khalid and Youn do not teach further comprising an information database, the information database including per-make-and-model consumer device battery characteristics, the cloud job manager further configured to compute the estimate of battery usage based on the per-make-and-model consumer device battery characteristics of the consumer device. However, Zhao teaches comprising an information database, the information database including per-make-and-model consumer device battery characteristics, the cloud job manager further configured to compute the estimate of battery usage based on the per-make-and-model consumer device battery characteristics of the consumer device. (Zhao [0030] It will be appreciated by one of skill in the art that meeting/battery usage table 400 may have been compiled by a distributor of battery manager 150. In practice, such a table may be summarized prior to storage in battery usage database 180 in order to increase the efficiency of step 320. For example, the installation process of battery manager 150 and/or battery usage database 180 may include designation of the manufacturer and model of mobile computing device 100, such that it may be sufficient to provide battery usage data for the specific model of device 100, i.e.
for an iPhone 5 as per the example hereinabove. It will also be appreciated that the format and usage of meeting/battery usage table 400 is exemplary; the present invention may support any suitable format and/or method for documenting actual battery usage and the use thereof for estimating battery requirements for future scheduled tasks.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Zhao with the system of Khalid, O'Hare, and Youn to check battery health. One having ordinary skill in the art would have been motivated to incorporate Zhao into the system of Khalid, O'Hare, and Youn for the purpose of monitoring battery consumption for scheduled tasks/events (Zhao paragraph 01). As to claim 38, it is rejected based on the same reason as claim 17. As to claim 39, it is rejected based on the same reason as claim 18. Claims 19 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of O'Hare (US 2018/0165131 A1) in further view of Premnath (US 2017/0364136 A1). As per claim 19, Khalid and O'Hare do not teach wherein the cloud job manager is further configured to track usage parameters associated with implementing the computational job, the usage parameters including: per-device processor usage time used by the consumer device selected to perform the at least a portion of the computational job; per-process processor usage time used per-process executing on the consumer device selected to perform the at least a portion of the computational job; data network usage; or a combination thereof.
However, Premnath teaches wherein the cloud job manager is further configured to track usage parameters associated with implementing the computational job, the usage parameters including: per-device processor usage time used by the consumer device selected to perform the at least a portion of the computational job; per-process processor usage time used per-process executing on the consumer device selected to perform the at least a portion of the computational job; data network usage; or a combination thereof. (Premnath [0080] In determination block 910, the processing device may determine whether a total workload, including the selected ready jobs, exceeds a total processor usage threshold. [0081] In response to determining that the total workload does not exceed the total processor usage threshold (i.e., determination block 910=“No”), the processing device may send approval of the request for permission to schedule the selected ready jobs to the scheduler in block 912. [0082] In response to determining that the total workload does exceed the total processor usage threshold (i.e., determination block 910=“Yes”), the processing device may select a combination of the ready jobs that reduce the total workload below the total processor usage threshold in optional block 914. The processing device may use the information of the request for permission to schedule the selected ready jobs to select the combination of the ready jobs. [0088] In determination block 1006, the processing device may determine whether the selected ready jobs together exceed a processor usage threshold. The processing device may compare estimated processor usage for execution of the ready jobs to the processor usage threshold to determine whether the ready jobs cumulatively exceed the processor usage threshold.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Premnath with the system of Khalid and O'Hare to check total resource usage. One having ordinary skill in the art would have been motivated to incorporate Premnath into the system of Khalid and O'Hare for the purpose of accepting or rejecting a scheduling request (Premnath paragraph 04). As to claim 40, it is rejected based on the same reason as claim 19. Claims 20 and 41 are rejected under 35 U.S.C. 103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of O'Hare (US 2018/0165131 A1) in further view of Martineau (US 2019/0238347 A1). As per claim 20, Khalid does not teach wherein the cloud job manager is further configured to communicate in a secure manner with a client job manager of the consumer device selected, wherein the secure manner includes splitting data communicated there between into multiple sequences with respective sequence ID values assigned thereto and applying an encryption method to the multiple sequences, wherein the cloud job manager and client job manager form a pairing of job managers, and wherein the encryption method is limited to interpretation by the job managers in the pairing. However, Martineau teaches wherein the cloud job manager is further configured to communicate in a secure manner with a client job manager of the consumer device selected, wherein the secure manner includes splitting data communicated there between into multiple sequences with respective sequence ID values assigned thereto and applying an encryption method to the multiple sequences, wherein the cloud job manager and client job manager form a pairing of job managers, and wherein the encryption method is limited to interpretation by the job managers in the pairing.
(Martineau [0012] For example, the secure hardware component may store one base key and the embedded software component may store multiple encrypted sequences with different authentication keys and different instructions that correspond to different subscribers or different mobile networks. [0031] As shown in FIG. 5, the method 500 may begin with the processing logic identifying a subscriber associated with a device (block 510). For example, the subscriber may be a user of a mobile network who seeks to authenticate the device with the mobile network. The processing logic may subsequently select an encrypted sequence from multiple encrypted sequences based on the identity of the subscriber (block 520). For example, the device may include multiple encrypted sequences where each encrypted sequence is assigned to a different subscriber of one or more mobile networks.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Martineau with the system of Khalid and O'Hare to establish secure communication. One having ordinary skill in the art would have been motivated to incorporate Martineau into the system of Khalid and O'Hare for the purpose of authenticating a device in a distributed system (Martineau paragraph 09). As to claim 41, it is rejected based on the same reason as claim 20. Claims 21 and 42 are rejected under 35 U.S.C.
103 as being unpatentable over Khalid (US 2019/0208007 A1) in view of O'Hare (US 2018/0165131 A1) in further view of Yang (US 2010/0125758 A1). As per claim 21, Khalid does not teach wherein the cloud job manager is further configured to communicate with a client job manager, of the consumer device selected, to install a virtual operating system (OS) on the consumer device selected and wherein, to perform the at least a portion of the computational job, the client job manager is configured to spawn at least one first process on the virtual OS installed, spawn at least one second process on a native OS on the consumer device, or spawn a combination thereof. However, Yang teaches wherein the cloud job manager is further configured to communicate with a client job manager, of the consumer device selected, to install a virtual operating system (OS) on the consumer device selected and wherein, to perform the at least a portion of the computational job, the client job manager is configured to spawn at least one first process on the virtual OS installed, spawn at least one second process on a native OS on the consumer device, or spawn a combination thereof. (Yang [0046] The distributed system checker spawns and runs processes on top of their native OS.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Yang with the system of Khalid and O'Hare to install a virtual operating system. One having ordinary skill in the art would have been motivated to incorporate Yang into the system of Khalid and O'Hare for the purpose of implementing a fault tolerant distributed system (Yang paragraph 02). As to claim 42, it is rejected based on the same reason as claim 21. Allowable Subject Matter Claims 12 and 33 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments Applicant's arguments filed on 12/11/2025 have been fully considered but they are not persuasive. Applicant's arguments with respect to claims 1, 22, 43, 44, and 45 have been considered but they are not persuasive. The applicant has argued: "Since O'Hare's OCP enabled device is not a 'cloud job manager,' there is no teaching, suggestion, or motivation in O'Hare that renders obvious any such actions performed by O'Hare's OCP enabled device be performed by a 'cloud job manager,' as required by Applicant's independent Claim 1 (emphasis added)." First, none of the limitations mapped to O'Hare recite the term "cloud job manager." The applicant's reliance on the statement that "[a]ll words in a claim must be considered in judging the patentability of that claim against the prior art" (see MPEP 2143.03) is therefore irrelevant in this particular instance. O'Hare was simply not relied upon to teach the "cloud job manager"; that element was already taught by Khalid. The applicant has not challenged the combination itself but has instead made an argument that is inconsistent with the examiner's mapping. To repeat: Khalid already teaches the cloud job manager. If every reference were required to teach every limitation, there would be no point to the combination. Moreover, Khalid and O'Hare are combinable because O'Hare's teachings occur in a cloud environment, as mentioned several times (O'Hare [0011] Further, an IoT device may be a virtual device, such as an application on a smart phone or other computing device. IoT devices may include IoT gateways, used to couple IoT devices to other IoT devices and to cloud applications, for data storage, process control, and the like). The examiner also presented an alternative rejection in which Khalid alone teaches offloading to a single device and O'Hare simply expands this to making multiple devices selectable. The applicant's response to the last office action did not address this alternative rejection.
It is requested that this alternative rejection be considered and responded to in the next office action. The applicant has also argued: "Further, O'Hare describes that proximity is considered by an offloading device itself and that such proximity is between itself and a nearby device. O'Hare's proximity is not based on 'proximity of [a] device to [an] end user device that transmitted the end user request[] [to perform a computational job via cloud computing],' as required by Applicant's independent Claim 1." This should be seen in conjunction with Khalid. Khalid has already handled the request; O'Hare simply finds the nearby device for offloading. The claim language itself is clear: "responsive to receipt of the end user request." The request is already handled by Khalid (and Khalid can offload this to a proximate device). All the examiner has to show is that this can be offloaded to multiple nearby devices. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). The paths to allowance presented to the applicant still remain. The three distinct paths (besides the allowable subject matter indicated in this office action) were shared with the applicant's representative last April. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAN KAMRAN whose telephone number is (571)272-3401. The examiner can normally be reached 9-5. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Blair, can be reached at (571)270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MEHRAN KAMRAN/Primary Examiner, Art Unit 2196

Prosecution Timeline

Apr 30, 2021
Application Filed
Feb 25, 2024
Non-Final Rejection — §103
May 29, 2024
Response Filed
Jul 18, 2024
Final Rejection — §103
Oct 24, 2024
Request for Continued Examination
Oct 28, 2024
Response after Non-Final Action
Nov 18, 2024
Non-Final Rejection — §103
Feb 21, 2025
Response Filed
May 14, 2025
Final Rejection — §103
Aug 19, 2025
Request for Continued Examination
Aug 28, 2025
Response after Non-Final Action
Sep 09, 2025
Non-Final Rejection — §103
Dec 11, 2025
Response Filed
Feb 22, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591444
Hardware Virtual Machine for Controlling Access to Physical Memory Space
2y 5m to grant Granted Mar 31, 2026
Patent 12585486
SYSTEMS AND METHODS FOR DEPLOYING A CONTAINERIZED NETWORK FUNCTION (CNF) BASED ON INFORMATION REGARDING THE CNF
2y 5m to grant Granted Mar 24, 2026
Patent 12585497
AMBIENT COOPERATIVE CANCELLATION WITH GREEN THREADS
2y 5m to grant Granted Mar 24, 2026
Patent 12572394
METHODS, SYSTEMS AND APPARATUS TO DYNAMICALLY FACILITATE BOUNDARYLESS, HIGH AVAILABILITY SYSTEM MANAGEMENT
2y 5m to grant Granted Mar 10, 2026
Patent 12561158
DEPLOYMENT OF A VIRTUALIZED SERVICE ON A CLOUD INFRASTRUCTURE BASED ON INTEROPERABILITY REQUIREMENTS BETWEEN SERVICE FUNCTIONS
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
90%
Grant Probability
99%
With Interview (+14.3%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 484 resolved cases by this examiner. Grant probability derived from career allow rate.
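The note above says the grant probability is derived from the career allow rate. A minimal sketch of that arithmetic, assuming "career allow rate" is simply grants divided by resolved cases rounded to the nearest percent (the exact rounding and the interview adjustment used by the tool are not stated):

```python
# Hypothetical derivation of the headline figures from the raw counts
# shown on this page ("434 granted / 484 resolved", "+34.7% vs TC avg").

granted = 434    # career grants
resolved = 484   # career resolved cases

allow_rate = granted / resolved            # ~0.8967
headline_pct = round(allow_rate * 100)     # displayed as "90%"

# "+34.7% vs TC avg" (percentage points) then implies a Tech Center
# average allow rate of roughly:
implied_tc_avg = allow_rate * 100 - 34.7   # ~55.0%

print(headline_pct, round(implied_tc_avg, 1))
```

Note the implied Tech Center average is an inference from the displayed delta, not a figure reported by the page itself.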
