Prosecution Insights
Last updated: April 19, 2026
Application No. 18/155,007

DECENTRALIZED DATA CENTERS

Non-Final OA (§103)
Filed: Jan 16, 2023
Examiner: WON, MICHAEL YOUNG
Art Unit: 2443
Tech Center: 2400 — Computer Networks
Assignee: Cachengo Inc.
OA Round: 9 (Non-Final)
Grant Probability: 80% (Favorable)
OA Rounds: 9-10
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% — above average (666 granted / 835 resolved; +21.8% vs TC avg)
Interview Lift: +28.7% — strong (based on resolved cases with interview)
Avg Prosecution: 3y 0m (typical timeline); 28 currently pending
Total Applications: 863 across all art units (career history)

Statute-Specific Performance

§101: 7.5% (-32.5% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 32.9% (-7.1% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 835 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This action is in response to the amendment filed October 23, 2025.

3. Claims 1-2, 11-12, and 20-21 have been amended.

4. Claims 1-5, 9-12, 15, 18-21, and 26-31 have been examined and remain pending.

Response to Arguments

5. Applicant's arguments, filed October 23, 2025, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Cho et al. (US 2009/0260008 A1), herein referenced Cho. Cho has been cited to teach the missing limitations as newly amended. It is noted that the termination of a provisioned resource is well-known, routine, and conventional and does not add any newly inventive concept nor an improvement over the functions and operations of the prior art, and therefore will not be the reason for an allowance. Please see the rejections set forth below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 6. Claims 1-5, 7, 9-12, 15, 17-19, 20-25 are rejected under 35 U.S.C. 103 as being unpatentable over Guim Bernat et al. (US 2021/0144517 A1) in view of Ling (US 2002/0002538 A1) and Cho et al. (US 2009/0260008 A1). INDEPENDENT: As per claim 1, Guim Bernat teaches a decentralized computing arrangement comprising: a management system connectable to a wide area network (see Guim Bernat, Title: “MULTI-ENTITY RESOURCE, SECURITY, AND SERVICE MANAGEMENT IN EDGE COMPUTING DEPLOYMENTS”; [0106]: “… , it will be understood these examples may be applied to many other deployments of wide area and local wireless networks, as well as the integration of wired networks (including optical networks and associated fibers, transceivers, etc.)”; [0232]: “In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, may occur via a wireless wide area network (WWAN) unit.”; and [0234]: “A wireless network transceiver 2266 (e.g., a radio transceiver) may be included to communicate with devices or services in the edge cloud 2290 via local or wide area network protocols.”), the management system comprising: one or more hardware processors that execute instructions (see Guim Bernat, [0218]: “In the illustrative example, the compute node 2200 includes or is embodied as a processor 2204 and a memory 2206. 
The processor 2204 may be embodied as any type of processor capable of performing the functions described herein (e.g., executing an application)”) to: provide a publish/subscribe messaging platform (see Guim Bernat, [0419]: “Further, an auditor, compliance entity, or third party may subscribe to EH values according to a publish-subscribe or other similar distributed messaging system such that the data flow, attestation flow, or other flow graph activity may be monitored, analyzed and inferenced as Edge telemetry or metadata.”; and [0538]: “The system 3100 uses a publish-subscribe or an information centric networking (ICN) configuration to allow updates to the resource to be applied uniformly to all cached copies simultaneously based on the policy that caches become subscribers to the tenant specific context topic. In an example, warm caches subscribe with high QoS requirements to ensure timely updates for resource access requests that occur locally.”); provide a node rental manager (see Guim Bernat, FIG. 
34; Abstract: “Among other examples, various configurations and features enable the management of resources (e.g., controlling and orchestrating hardware, acceleration, network, processing resource usage), security (e.g., secure execution and communication, isolation, conflicts), and service management (e.g., orchestration, connectivity, workload coordination), in edge computing deployments, such as by a plurality of edge nodes of an edge computing environment configured for executing workloads from among multiple tenants”; [0004]: “offer or host a cloud-like distributed service, to offer orchestration and management for applications and coordinated service instances among many types of storage and compute resources”; [0148]: “A multi-tenant orchestrator may be used to perform key management, trust anchor management, and other security functions related to the provisioning”) that: causes presentation of a Graphical User Interface (see Guim Bernat, [0239]: “… A display or console hardware, in the context of the present system, may be used to provide output and receive input of an edge computing system; to manage components or services of an edge computing system; identify a state of an edge computing component or service; or to conduct any other number of management or administration functions or service use cases.”; and [0355]: “With language user interface (LUI) gaining ground as a more natural way of interfacing with the user, there will be more speech analytics applications that will touch us all. Chatbots is an example. 
The following provides examples of speech analytics in the (edge) cloud (server usages)”); detects a selection of a resource, via the Graphical User Interface (see Guim Bernat, [0004]: “Edge computing may, in some scenarios, offer or host a cloud-like distributed service, to offer orchestration and management for applications and coordinated service instances among many types of storage and compute resources; and [0168]: “The technical capabilities needed to support the discovery and use of may be baked into respective devices by a manufacturer, and an “onboarding”-type procedure may occur with each OaaS that the tenant selects and utilizes within the edge computing system.”); and updates a rental status of the selected resource (see Guim Bernat, [0691]: “Monetary or resource costs of such computations may be mapped to profiles of respective users, to be dependent on the actual current cost (or a bidding procedure as discussed above).”; and [0901]: “The virtual domains may include a set of trusted entities which are responsible to attest or validate any resource data that is changed at the domain (e.g., new resource, change on status, etc.).”); and provide a payment manager that manages billing for the selected resource (see Guim Bernat, [0135]: “The higher layer of the services hierarchy, Business Functional Services (BFS) are operable, and these services are the different elements of the service which have relationships to each other and provide specific functions for the customer… Payment history of user entity, Authorization of user entity of resource(s), etc.”; [0167]: “Further aspects of FaaS may enable deployment of edge functions in a service fashion, including a support of respective functions that support edge computing as a service (Edge-as-a-Service or “EaaS”). 
Additional features of FaaS may include: a granular billing component that enables customers (e.g., computer code developers) to pay only when their code gets executed”; [0186]: “4) Enabling of new edge processing use cases: For example, a service on the edge that allows biometry authentication. Or, a service which enables payment to be done real-time via voice analysis as long as the reliability requirements are met.”; and [0691]: “Monetary or resource costs of such computations may be mapped to profiles of respective users, to be dependent on the actual current cost (or a bidding procedure as discussed above).”); and a remote edge device decentralized from the management system (see Guim Bernat, [0002]: “Components that can perform edge computing operations (“edge nodes”) can reside in whatever location needed by the system architecture or ad hoc service (e.g., in a high performance compute data center or cloud installation; a designated edge node server, an enterprise server, a roadside server, a telecom central office; or a local or peer at-the-edge device being served consuming edge services)”; [0148]: “In further examples, an edge computing system is extended to provide for orchestration of multiple applications through the use of containers (a contained, deployable unit of software that provides code and needed dependencies) in a multi-owner, multi-tenant environment. A multi-tenant orchestrator may be used to perform key management, trust anchor management, and other security functions related to the provisioning and lifecycle of the trusted ‘slice’ concept in FIG. 16. 
For instance, an edge computing system may be configured to fulfill requests and responses for various client endpoints from multiple virtual edge instances (and, from a cloud or remote data center).”; and [1221]: “In some other aspects, a bandwidth management API may be present both at the access level edge 7304 and also in more remote edge locations, in order to set up transport networks (e.g., for Content Delivery Network (CDN)-based services)”), the remote edge device comprising: a messaging interface to receive, from the publish/subscribe messaging platform, a message that controls installation of the selected resource on the remote edge device (see Guim Bernat, [0196]: “As result of the requirements 1720 for the invoked workload(s), a selection may be made for a particular configuration of a workload execution platform 1730. The configuration for the workload execution platform 1730 (e.g., configurations 1731, 1733, 1735, provided from hardware 1732, 1734, 1736) may be selected by identifying an execution platform from among multiple edge nodes (e.g., platforms 1 to N); by reconfiguring an execution platform within a configurable rack scale design system; or by reconfiguring an execution platform through pooling or combining resources from one or multiple platforms.”; [0538]: “The system 3100 uses a publish-subscribe or an information centric networking (ICN) configuration to allow updates to the resource to be applied uniformly to all cached copies simultaneously based on the policy that caches become subscribers to the tenant specific context topic. In an example, warm caches subscribe with high QoS requirements to ensure timely updates for resource access requests that occur locally.”; and [0610]: “Edge computing installations are expanding to support a variety of use cases, such as smart cities, augmented or virtual reality, assisted or autonomous driving, factory automation, and threat detection, among others. 
Some emerging uses includes supporting computation or data intensive applications, such as event triggered distributed functions. Proximity to base stations or network routers for devices producing the data is an important factor in expeditious processing. In some examples, these edge installations include pools of memory or storage resources to achieve real-time computation while performing high levels of summarization (e.g., aggregation) or filtering for further processing in backend clouds.”); and a wide area network interface to connect the remote edge device to the wide area network (see Guim Bernat, [0106]: “… , it will be understood these examples may be applied to many other deployments of wide area and local wireless networks, as well as the integration of wired networks (including optical networks and associated fibers, transceivers, etc.)”; [0232]: “In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, may occur via a wireless wide area network (WWAN) unit.”; and [0234]: “A wireless network transceiver 2266 (e.g., a radio transceiver) may be included to communicate with devices or services in the edge cloud 2290 via local or wide area network protocols.”), the messaging interface configured to receive configuration data from the management system to install the resource, the resource being sent to the remote edge device via the wide area network (see Guim Bernat, [0088]: “The following embodiments generally relate to data processing, service management, resource allocation, compute management, network communication, application partitioning, and communication system implementations, and in particular, to techniques and configurations for adapting various edge computing devices and entities to dynamically support multiple entities (e.g., multiple tenants, users, stakeholders, service instances, applications, etc.) 
in a distributed edge computing environment.”; [0158]: “Additionally or alternatively, the cloud 1244 may represent a network such as the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), or a wireless wide area network (WWAN) including proprietary and/or enterprise networks for a company or organization, or combinations thereof.”; [0613]: “At the lowest layer, SDSi can provide services and guarantees to systems to ensure active adherence to contractually agreed-to service level specifications that a single resource has to provide within the system.”; and [1032]: “The instructions may include transformation policies 5716 that execute the transformation functions 5726 and invoke registration functions 5718 to configure necessary platform configuration options.”). Although Guim Bernat teaches a GUI, Guim Bernat does not explicitly teach that the GUI identifies a plurality of applications for rent, a description for each application of the plurality of applications, and a rental price for each application of the plurality of applications. Ling teaches identifying a plurality of applications for rent, a description for each application of the plurality of applications, and a rental price for each application of the plurality of applications (see Ling, [0004], “Software products being offered by an ASP are typically displayed at the purchaser's client computer. The display may include a description of each software program and a price for the software. As the purchaser sends a request to purchase software programs to the ASP server, the server must interact with the client system to confirm the purchases and the payment method.”; and [0006]: “Since some software products are relatively expensive or use of a particular software product may become obsolete after a period or number of uses by a purchaser, the purchaser may want to rent the software product instead of purchasing it outright. 
Thus, the software may be rented for use for a certain period of time or for a certain number of uses. For example, it may be preferable to rent computer games rather than purchase them, since computer games often lose their interest and appeal after repeated playing. Additionally, a purchaser may wish to rent the use of a software program that is used only occasionally, such as a language translator or document clean-up or editing software. The rental of software thus provides users a relatively inexpensive and economic method to use software.”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of Guim Bernat in view of Ling by implementing identifying a plurality of applications for rent, a description for each application of the plurality of applications, and a rental price for each application of the plurality of applications. One would be motivated to do so because Guim Bernat teaches in paragraph [0148], “In the example of FIG. 8, an edge computing system 800 is extended to provide for orchestration of multiple applications through the use of containers (a contained, deployable unit of software that provides code and needed dependencies) in a multi-owner, multi-tenant environment,” and because such information enables a client/customer to make a better informed decision on a purchase. Guim Bernat does not explicitly teach terminating a virtual machine that provides the selected application in response to a request to terminate rental of the selected application. Cho teaches terminating a virtual machine that provides the selected application in response to a request to terminate rental of the selected application (see Cho, [0048]: “On the other hand, if there is no response to the return request, the other virtual machine 120 terminates use of the processor in a forced way based on the allocation policy. 
In this case, the returned processor is allocated to the virtual machine 110 according to the waiting list and the other virtual machine 120 is added to the waiting list after the forced termination”; and [0050]: “If the return of the processor is to be done, the virtual machine monitor 130 terminates the use of the processor of the virtual machine 110 and adds the virtual machine 110 to the waiting list.”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of Guim Bernat in view of Cho by implementing terminating a virtual machine that provides the selected application in response to a request to terminate rental of the selected application. One would be motivated to do so because Guim Bernat teaches in the Abstract, “Among other examples, various configurations and features enable the management of resources (e.g., controlling and orchestrating hardware, acceleration, network, processing resource usage), security (e.g., secure execution and communication, isolation, conflicts), and service management (e.g., orchestration, connectivity, workload coordination), in edge computing deployments, such as by a plurality of edge nodes of an edge computing environment configured for executing workloads from among multiple tenants”, and further teaches in paragraph [0570], “In an example, the method may include returning the resource by sending an indication to the one of the other orchestrators 3512, 3513, when use of the resource is complete within the region. Other changes or resource release/tear down actions may occur after returning the resource.”. 
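For technical context on the limitations at issue in the claim 1 rejection — a publish/subscribe message that controls installation of a selected application, and termination of the virtual machine providing that application when rental ends — the following is a minimal, purely illustrative sketch by the editor. It is not drawn from the application or the cited references; the classes, topic strings, and message fields are all invented, and the "virtual machine" is reduced to a flag.

```python
# Illustrative sketch only: an in-process publish/subscribe bus standing in
# for the claimed messaging platform. All names here are hypothetical.
from collections import defaultdict

class Bus:
    """Toy publish/subscribe messaging platform."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, msg):
        for handler in self._subs[topic]:
            handler(msg)

class EdgeDevice:
    """Stands in for the remote edge device's messaging interface."""
    def __init__(self, bus, device_id):
        self.installed = {}  # app name -> VM-running flag
        bus.subscribe(f"install/{device_id}", self.on_install)
        bus.subscribe(f"terminate/{device_id}", self.on_terminate)

    def on_install(self, msg):
        # The message controls installation; in the claimed system the
        # application itself would arrive over the wide area network.
        self.installed[msg["app"]] = True

    def on_terminate(self, msg):
        # On a rental-termination request, tear down the app's VM.
        self.installed.pop(msg["app"], None)

bus = Bus()
device = EdgeDevice(bus, "edge-1")
bus.publish("install/edge-1", {"app": "object-classifier"})
assert "object-classifier" in device.installed
bus.publish("terminate/edge-1", {"app": "object-classifier"})
assert "object-classifier" not in device.installed
```

The sketch only shows message flow; billing, rental status, and GUI selection are omitted.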
As per claim 11, Guim Bernat, Ling, and Cho teach a non-transitory computer-readable medium comprising instructions, that when executed by one or more processors, causes the one or more processors to perform operations (see Guim Bernat, [0244]: “In an example, the instructions 2282 provided via the memory 2254, the storage 2258, or the processor 2252 may be embodied as a non-transitory, machine-readable medium 2260 including code to direct the processor 2252 to perform electronic operations in the edge computing node 2250. The processor 2252 may access the non-transitory, machine-readable medium 2260 over the interconnect 2256. For instance, the non-transitory, machine-readable medium 2260 may be embodied by devices described for the storage 2258 or may include specific storage units such as optical disks, flash drives, or any number of other hardware devices.”) comprising: providing, by a management system, a publish/subscribe messaging platform; causing, by the management system, presentation of a Graphical User Interface that identifies a plurality of applications for rent; detecting, by the management system, a selection of an application, of the plurality of applications, via the Graphical User Interface; receiving, at a remote edge device and from the publish/subscribe messaging platform, a message that controls installation of the selected application on the remote edge device; receiving, at the remote edge device, configuration data from the management system to install the application, the application being sent to the remote edge device via a wide area network; updating, by the management system, a rental status of the application; and managing, by the management system, billing for the application (see Claim 1 rejection above). 
As per claim 20, Guim Bernat, Ling, and Cho teach a method for installing a rented application on an edge device, the method comprising: providing, by a management system, a publish/subscribe messaging platform; causing, by the management system, presentation of a Graphical User Interface that identifies a plurality of applications for rent; detecting, by the management system, a selection of an application, of a plurality of applications, via the Graphical User Interface; receiving, at a remote edge device and from the publish/subscribe messaging platform, a message that controls installation of the selected application on the remote edge device; receiving, at the remote edge device, configuration data from the management system to install the application, the application being sent to the remote edge device via a wide area network; updating, by the management system, a rental status of the application; and managing, by the management system, billing for the application (see Claim 1 rejection above). DEPENDENT: As per claims 2, 12, and 21, which respectively depend on claims 1, 11, and 20, Guim Bernat further teaches wherein the remote edge device hosts a virtual machine, and the selected application runs on the virtual machine (see Guim Bernat, [0003]: “Applications that have been adapted for edge computing include but are not limited to virtualization of traditional network functions (e.g., to operate telecommunications or Internet services) and the introduction of next-generation features and services (e.g., to support 5G network services).”; and [0151]: “FIG. 9 illustrates various compute arrangements deploying containers in an edge computing system… or to separately execute containerized virtualized network functions through execution via compute nodes (923 in arrangement 920). 
This arrangement is adapted for use of multiple tenants in system arrangement 930 (using compute nodes 936), where containerized pods (e.g., pods 912), functions (e.g., functions 913, VNFs 922, 936), and functions-as-a-service instances (e.g., FaaS instance 915) are launched within virtual machines (e.g., VMs 934, 935 for tenants 932, 933) specific to respective tenants (aside the execution of virtualized network functions).”). As per claim 3, which depends on claim 1, Guim Bernat further teaches wherein the remote edge device is a rented device (see Guim Bernat, [0138]: “Moreover, any number of the edge computing architectures described herein may be adapted with service management features…. The primary objective of business level services is to attain the goals set by the customer according to the overall contract terms and conditions established between the customer and the provider at the agreed to financial agreement. BLS(s) are comprised of several Business Functional Services (BFS) and an overall SLA.”; [0154]: “an application or function (e.g., 1022 or 1023) operating at a specific distributed edge instance (thin edge 1021) may invoke GPU processing capabilities further in the edge cloud (offered by the large/medium edge instance 1030, in the form of a GPU-as-a-service 1030); or as another example, an application or function (e.g., 1025, 1026) at a client computer (client PC 1024) may invoke processing capabilities further in the edge cloud (offered by the offered by the large/medium edge instance 1030, in the form of cryptography-as-a-service 1035). 
Other applications, functions, functions-as-a-service, or accelerator-as-a-service (e.g., 1031, 1032, 1033, 1034) may be offered by the edge cloud (e.g., with compute 1036), coordinated or distributed among the edge nodes, and the like”; and [0167]: “Further aspects of FaaS may enable deployment of edge functions in a service fashion, including a support of respective functions that support edge computing as a service (Edge-as-a-Service or “EaaS”).”). As per claim 4, which depends on claim 1, Guim Bernat further teaches wherein the application on the remote edge device stores data suitable for performing analytics using machine learning (see Guim Bernat, [0211]: “Within the edge platform capabilities 2120, specific acceleration types may be configured or identified within features in order to ensure service density is satisfied across the edge cloud. Specifically, four primary acceleration types may be deployed in an edge cloud configuration: (1) General Acceleration (e.g., FPGAs) to implement basic blocks such as a Fast Fourier transform (FFT), k-nearest neighbors algorithm (KNN) and machine learning workloads”; and [0543]: “In an example, the edge node is a remote edge node from a local node initiating the request. The method concludes with an operation to provide access to a resource corresponding to the physical address 3214 on the edge node. In an example, the resource may include data stored at the physical address on the edge node, a service operating at the physical address 3214 on the edge node, or a location of the physical address 3214 on the edge node.”). 
As per claims 5 and 15, which respectively depend on claims 1 and 11, Guim Bernat further teaches wherein the application performs object classification (see Guim Bernat, [0196]: “The respective type classifications may be associated with sets of requirements 1720, which may specify workload requirements 1721 for the particular classification (e.g., performance requirements, functional requirements), as compared with operator requirements or constraints 1722 (available number of platforms, form factors, power, etc.). As result of the requirements 1720 for the invoked workload(s), a selection may be made for a particular configuration of a workload execution platform 1730. The configuration for the workload execution platform 1730 (e.g., configurations 1731, 1733, 1735, provided from hardware 1732, 1734, 1736) may be selected by identifying an execution platform from among multiple edge nodes (e.g., platforms 1 to N)”) or runs a facial recognition inference engine (see Guim Bernat, [0289]: “Video analytics refers to performing live video analytics and video pre-processing or transcoding, at the edge for presenting to a user device. Traffic video analysis and alarm systems are examples of video analytics at the edge. Storage and compute resources are relied upon for this type of usage”; and [0290]: “Video analytics play an important role in many fields. For example, face recognition from traffic and security cameras is already playing an essential role in law and order. Several other type of analytics can be done on video contents such as object tracking, motion detection, event detection, flame and smoke detection, AT learning of patterns in live stream or archive of videos, etc.”). 
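For claims 5 and 15 ("the application performs object classification"), the following toy sketch by the editor illustrates what an object-classification application could look like in the abstract. It is entirely hypothetical: neither the application nor the cited references disclose this algorithm, and the nearest-centroid classifier, class labels, and feature vectors are invented for illustration.

```python
# Editor's hypothetical stand-in for an edge application that performs
# object classification: a nearest-centroid classifier over 2-D features.
import math

# Hypothetical per-class feature centroids (invented values).
CENTROIDS = {
    "vehicle": (0.9, 0.1),
    "person": (0.1, 0.9),
}

def classify(features):
    """Return the label whose centroid is nearest to the feature vector."""
    return min(CENTROIDS, key=lambda label: math.dist(features, CENTROIDS[label]))

assert classify((0.8, 0.2)) == "vehicle"
assert classify((0.2, 0.8)) == "person"
```

A production system would of course use a trained model (e.g., the facial-recognition inference engine alternative recited in the claim); this sketch only fixes the input/output shape of "object classification."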
As per claim 9, which depends on claim 1, Guim Bernat further teaches wherein the remote edge device comprises a command interface that has a listener service sub-module to establish a connection to the messaging interface of the management system (see Guim Bernat, [0139]: “With these features, the policies can trigger the invocation of analytics and dashboard services at the edge ensuring continuous service availability at reduced fidelity or granularity. Once network transports are re-established, regular data collection, upload and analytics services can resume.”; [0243]: “The storage 2258 may include instructions 2282 in the form of software, firmware, or hardware commands to implement the techniques described herein”; [0312]: “The edge can host a central image recognition and object identification service which can be used by the AR apps. The AR apps specify the targets through an API and this service can respond with objects as desired when an input request is sent.”; and [0358]: “As another example, in the digital assistant domain, there are: voice messaging, voice search, voice dialing, voice memo, voice commands, voice navigate, and voice mail”). 
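For claim 9 — a command interface whose listener service sub-module establishes a connection to the messaging interface of the management system — the following is a minimal sketch by the editor of that structural relationship. All class names are hypothetical and no real transport is used; the "connection" is a local callback registration.

```python
# Editor's sketch of the claim-9 structure. Names are invented; the
# connection is modeled as callback registration rather than a network link.

class MessagingInterface:
    """Stands in for the management system's messaging interface."""
    def __init__(self):
        self.listeners = []

    def connect(self, listener):
        self.listeners.append(listener)

    def send(self, command):
        for listener in self.listeners:
            listener.on_command(command)

class ListenerService:
    """Sub-module of the edge device's command interface."""
    def __init__(self):
        self.received = []

    def establish(self, interface):
        # The claimed "establish a connection to the messaging interface".
        interface.connect(self)

    def on_command(self, command):
        self.received.append(command)

iface = MessagingInterface()
listener = ListenerService()
listener.establish(iface)
iface.send({"cmd": "status"})
assert listener.received == [{"cmd": "status"}]
```

The point of the sketch is only the direction of the relationship: the listener sub-module initiates the connection, after which commands flow from the management system to the device.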
As per claims 10 and 19, which respectively depend on claims 1 and 11, Guim Bernat further teaches wherein a plurality of different applications is installed on the remote edge (see Guim Bernat, [0003]: “Applications that have been adapted for edge computing include but are not limited to virtualization of traditional network functions (e.g., to operate telecommunications or Internet services) and the introduction of next-generation features and services (e.g., to support 5G network services).”; [0004]: “Edge computing may, in some scenarios, offer or host a cloud-like distributed service, to offer orchestration and management for applications and coordinated service instances among many types of storage and compute resources”; [0610]: “Edge computing installations are expanding to support a variety of use cases, such as smart cities, augmented or virtual reality, assisted or autonomous driving, factory automation, and threat detection, among others. Some emerging uses includes supporting computation or data intensive applications, such as event triggered distributed functions. Proximity to base stations or network routers for devices producing the data is an important factor in expeditious processing. In some examples, these edge installations include pools of memory or storage resources to achieve real-time computation while performing high levels of summarization (e.g., aggregation) or filtering for further processing in backend clouds.”; [0613]: “To reiterate, exclusive reservations to premium services increase the need for installing large numbers of resources in edge data centers, as the non-premium services also have similar resource requirements”; [0696]: “As a result of this trust relationship, a TSP may be able to install a secure, trusted software module in those devices and into customer devices”; and [0697]: “TSPs may install trusted software modules in anchor point devices and, with customer's opt-in agreement, may also install additional modules in devices belonging to a customer.”).

As per claim 18, which depends on claim 11, Guim Bernat further teaches wherein the installation of the application comprises storing the applications in a container on the remote edge device (see Guim Bernat, FIG. 9; and [0012]: “FIG. 9 illustrates various compute arrangements deploying containers in an edge computing system.”).

As per claims 26-28, which respectively depend on claims 1, 11, and 20, Guim Bernat further teaches wherein the remote edge device comprises a storage appliance that creates a hash value of data stored on the remote edge device and records the hash value to a blockchain ledger to establish a historical transaction (see Guim Bernat, [0427]: “The attestations of intermediate actions may be remembered and made available for subsequent queries. For example, data and code “trust” may be represented as a cryptographic hash. A hash tree of attested resources can be used to keep a current “accepted” attestation value of everything that preceded it. A centralized approach would maintain the hash tree updates and replicate query-optimized copies as an “attestation cache”; a distributed approach may utilize a blockchain.”; [0895]: “For example, a set of distributed ledgers are responsible to deploy monitoring and attestation software elements in the respective distributed edges. These monitoring elements may be used to track services, such as when a particular service is executed.”; [0898]: “The EANS may provide a set of distributed ledger services to monitor and attest hardware or software computing resources accessible via edge access networks or edge core networks.”; and [1079]: “The hash may be pre-computed and at block granularity over the original (e.g., unencrypted, plaintext, etc.) contents, so that it is both more efficient to compute a second level hash over the block hashes that are being sent, or more tamper resistant because any intermediary has to reverse engineer both the hash and the encryption in order to tamper.”).

As per claims 29-31, which respectively depend on claims 1, 11, and 20, Guim Bernat further teaches wherein the node rental manager is configured to remove all installed applications from the remote edge device and remove the remote edge device from all joined peer groups when the remote edge device is made available for rent (see Guim Bernat, [0131]: “SDSi hardware also provides the ability for the infrastructure and resource owner to empower the silicon component (e.g., components of a composed system 442 that produce metric telemetry 440) to access and manage (add/remove) product features and freely scale hardware capabilities and utilization up and down. Furthermore, it provides the ability to provide deterministic feature assignments on a per-tenant basis. It also provides the capability to tie deterministic orchestration and service management to the dynamic (or subscription based) activation of features without the need to interrupt running services, client operations or by resetting or rebooting the system.”; [0208]: “The version of hardware, software, and firmware are adjusted appropriately. Possibly, this implies moving to a backward revision or passing over backward revisions to find and allocate resources according to the simulation defined environment. This may also involve removal of hardware, software, and firmware that isn't used by the workload.”; and [0446]: “The different orchestration entities running at the respective edge locations may pick a job to be executed in the local edge node. Once the job is performed, the job (service or FaaS) is removed from the queue (on an ongoing basis), and the data and job definition can be moved if needed and executed.”).

Conclusion

7. For the reasons above, claims 1-5, 9-12, 15, 18-21, and 26-31 have been rejected and remain pending.

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL Y WON whose telephone number is (571)272-3993. The examiner can normally be reached on Wk.1: M-F: 8-5 PST & Wk.2: M-Th: 8-7 PST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nicholas R Taylor, can be reached on 571-272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Michael Won/
Primary Examiner, Art Unit 2443
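The two-level hashing the examiner cites from Guim Bernat [1079] (per-block hashes over the plaintext contents, plus a second-level hash computed over those block hashes) can be sketched as follows. This is an illustrative reconstruction, not code from the application or the reference; the 4 KiB block size and the choice of SHA-256 are assumptions.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block granularity; the reference does not specify one


def block_hashes(data: bytes, block_size: int = BLOCK_SIZE) -> list[bytes]:
    """Hash each fixed-size block of the original (unencrypted) contents."""
    return [
        hashlib.sha256(data[i:i + block_size]).digest()
        for i in range(0, len(data), block_size)
    ]


def second_level_hash(hashes: list[bytes]) -> bytes:
    """Hash the concatenated block hashes.

    This single digest is what would be recorded (e.g., to a ledger)
    to attest the full contents without re-hashing every byte.
    """
    return hashlib.sha256(b"".join(hashes)).digest()


data = b"x" * 10000  # spans three 4 KiB blocks
digest = second_level_hash(block_hashes(data))
```

Under this reading, the efficiency claim in [1079] follows because only the changed block's hash plus the cheap second-level hash need recomputing, and the tamper-resistance claim follows because an intermediary must defeat both hashing levels.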

Prosecution Timeline

Jan 16, 2023: Application Filed
Aug 24, 2023: Non-Final Rejection — §103
Jan 29, 2024: Response Filed
Feb 20, 2024: Final Rejection — §103
Apr 26, 2024: Response after Non-Final Action
May 06, 2024: Final Rejection — §103
Jul 24, 2024: Applicant Interview (Telephonic)
Jul 25, 2024: Final Rejection — §103
Aug 20, 2024: Request for Continued Examination
Aug 25, 2024: Response after Non-Final Action
Aug 27, 2024: Non-Final Rejection — §103
Sep 17, 2024: Interview Requested
Sep 24, 2024: Applicant Interview (Telephonic)
Sep 24, 2024: Examiner Interview Summary
Oct 22, 2024: Response Filed
Nov 05, 2024: Final Rejection — §103
Dec 03, 2024: Interview Requested
Dec 17, 2024: Response after Non-Final Action
Dec 17, 2024: Notice of Allowance
Jan 08, 2025: Response after Non-Final Action
Feb 12, 2025: Response after Non-Final Action
Feb 14, 2025: Response after Non-Final Action
Mar 10, 2025: Non-Final Rejection — §103
Aug 07, 2025: Response Filed
Aug 25, 2025: Final Rejection — §103
Sep 23, 2025: Interview Requested
Oct 23, 2025: Request for Continued Examination
Oct 30, 2025: Response after Non-Final Action
Nov 19, 2025: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598204: FEDERATED ABNORMAL PROCESS DETECTION FOR KUBERNETES CLUSTERS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596959: METHOD FOR COLLABORATIVE MACHINE LEARNING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592926: RISK ASSESSMENT FOR PERSONALLY IDENTIFIABLE INFORMATION ASSOCIATED WITH CONTROLLING INTERACTIONS BETWEEN COMPUTING SYSTEMS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587507: CONTROLLER-ENABLED DISCOVERY OF SD-WAN EDGE DEVICES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12580929: TECHNIQUES FOR ASSESSING MALWARE CLASSIFICATION (granted Mar 17, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 9-10
Grant Probability: 80% (99% with interview, +28.7%)
Median Time to Grant: 3y 0m
PTA Risk: High

Based on 835 resolved cases by this examiner. Grant probability derived from career allow rate.
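The headline figures above can be reproduced from the raw career counts. A minimal sketch, assuming the grant probability is simply the career allow rate (666 granted of 835 resolved) and that the with-interview figure adds the reported +28.7-point lift; the tool's exact methodology is not disclosed, so the 99% ceiling is an assumption introduced here to match the displayed number.

```python
def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage."""
    return 100 * granted / resolved


def with_interview(base_pct: float, lift_pts: float, cap: float = 99.0) -> float:
    """Add the reported interview lift in percentage points, capped.

    The cap is an assumption: 79.8 + 28.7 would exceed 100, and the
    page reports 99%, so a 99% ceiling reproduces the displayed figure.
    """
    return min(base_pct + lift_pts, cap)


base = grant_probability(666, 835)    # ~79.8, displayed as 80%
boosted = with_interview(base, 28.7)  # 99.0, matching "With Interview"
```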
