Prosecution Insights
Last updated: April 19, 2026
Application No. 18/532,370

MULTI-LAYER DISTRIBUTED PROCESSING SYSTEM FOR USER SERVICE REQUESTS IN DIGITAL TWIN ENVIRONMENT

Status: Non-Final OA (§103)
Filed: Dec 07, 2023
Examiner: ANYA, CHARLES E
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: Korea Electronics Technology Institute
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% — above average (727 granted / 891 resolved; +26.6% vs TC avg)
Interview Lift: strong, +33.5% (allow rate among resolved cases with an interview vs without)
Typical Timeline: 3y 2m avg prosecution; 41 applications currently pending
Career History: 932 total applications across all art units
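The headline rate above is straightforward arithmetic on the stated counts; a minimal sketch in Python (values copied from this report, display rounding to the nearest whole percent assumed):

```python
# Career allow rate = granted / resolved, using the counts shown above.
granted = 727
resolved = 891

allow_rate = granted / resolved * 100  # exact value, in percent

# The dashboard displays this rounded to a whole percent.
print(f"{allow_rate:.1f}% -> displayed as {round(allow_rate)}%")
```

727/891 works out to about 81.6%, which the report rounds to the displayed 82%.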

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 61.1% (+21.1% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center average is an estimate • Based on career data from 891 resolved cases
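The per-statute deltas can be sanity-checked: subtracting each "vs TC avg" delta from the examiner's rate should recover the Tech Center baseline. A quick sketch (figures from the table above; the single 40.0% baseline it yields is derived here, not stated in the report):

```python
# (examiner rate, delta vs TC avg) in percent, per statute, from the table.
stats = {
    "§101": (11.2, -28.8),
    "§103": (61.1, +21.1),
    "§102": (6.8, -33.2),
    "§112": (10.4, -29.6),
}

# Implied Tech Center average = examiner rate - delta.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}

# All four statutes point at the same baseline estimate.
print(implied_tc_avg)
```

Every pair implies the same 40.0% baseline, consistent with the chart drawing a single Tech Center average line.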

Office Action — §103
DETAILED ACTION

Claims 1-9 are pending in this application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2013/0085993 A1 to Li et al. in view of U.S. Pub. No. 2015/0046929 A1 to Kumar et al.

As to claim 1, Li teaches a multi-layer distributed processing system for user service requests in a digital twin environment, comprising: a central server (Root Node 201/Parent Node 502/Node 911) including a central service manager, the central service manager configured to receive service requests (initial message t0/Message 1001) from users (Web Browsers 204) (“…Second, the administrator uses a web browser to interact with the root node. These interactions trigger the nodes in the tree to exchange messages using one of three management service processes…For clarity, only three nodes (201, 202, 203) and an administrative web browser 204 are shown in FIG. 1, wherein node 201 is a root node. The messages exchanged are numbered in FIG. 2 as t0 through t15…Upon receipt of an initial message t0 from administrative web browser 204 to root node 201, REST service discovery process 210 includes parallel and recursive GET requests initiated by root node 201, as illustrated by message t1 and message t2. In service discovery process 210, the V(P, S.sub.i) and rc(P, S.sub.i, K.sub.j) values on the child nodes 202 and 203 are calculated in parallel according to Equations (1) and (2). The values calculated by child nodes 202 and 203 are sent back in response message t3 and response message t4, from child nodes 202 and 203 respectively, to the root node 201…” paragraphs 0069/0070/0085/0099); a plurality of local servers (Child Nodes (Services) 202/203/Child Node 503/Nodes 912/913) including a local service manager, the local service manager configured to receive the service request (recursive GET requests) from the central service manager (Root Node 201/Parent Node 502/Node 911) (“…Upon receipt of an initial message t0 from administrative web browser 204 to root node 201, REST service discovery process 210 includes parallel and recursive GET requests initiated by root node 201, as illustrated by message t1 and message t2. In service discovery process 210, the V(P, S.sub.i) and rc(P, S.sub.i, K.sub.j) values on the child nodes 202 and 203 are calculated in parallel according to Equations (1) and (2). The values calculated by child nodes 202 and 203 are sent back in response message t3 and response message t4, from child nodes 202 and 203 respectively, to the root node 201…For efficiency, a parent service sends identical service requests in parallel to its child services and combines the partial results from the children into the final response. The child services process a request recursively as its parent, thus sending a request to a grandchild…” paragraphs 0070/0085/0093); and a plurality of edge devices (grandchild) including an edge service manager, the edge service manager configured to receive the service request (sending a request) from the local server (Child Nodes (Services) 202/203/Child Node 503) (“…For efficiency, a parent service sends identical service requests in parallel to its child services and combines the partial results from the children into the final response. The child services process a request recursively as its parent, thus sending a request to a grandchild…” paragraph 0093); and wherein the central service manager is configured to distribute the service requests from the users to lower layers through propagation between the service managers (“…For efficiency, a parent service sends identical service requests in parallel to its child services and combines the partial results from the children into the final response. The child services process a request recursively as its parent, thus sending a request to a grandchild…” paragraph 0093).

Li is silent with reference to the central service manager configured to create a first process for processing the service requests, the local service manager configured to create a second process for processing the service request, and the edge service manager configured to create a third process for processing the service request. Kumar teaches the central service manager configured to create a first process for processing the service requests (Parent Process 18), the local service manager configured to create a second process for processing the service request (Subprocess 50), and the edge service manager configured to create a third process for processing the service request (Other Partner Services/Processes 58).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Li with the teaching of Kumar because the teaching of Kumar would improve the system of Li by providing a chain or tree of related processes for processing web service requests.

As to claim 2, Kumar teaches the system of claim 1, wherein the service request from the user is configured to be received only by the central server and is configured to be hierarchically propagated to the local servers and the edge devices through the central server (one or more composites, e.g., the first composite 46 or the second composite 48/Other Partner Services/Processes 58) (“…For an asynchronous request from the subprocess to another partner process 58, a conversation identification for a normalized WS-addressing message may be sent to the partner service 58 along with instance identification (parent process instance ID) identifying an instance of the parent process 18, thereby facilitating routing of callback messages to the right instance of the parent process 18. Note that the other partner services and/or business processes 58 may be part of one or more composites, e.g., the first composite 46 or the second composite 48. WS-addressing supports the use of asynchronous interactions by specifying a common SOAP header containing an EndPoint Reference (EPR) to which the response is to be sent…Specific subprocesses discussed herein may include standalone subprocesses or inline subprocesses. Subprocesses that are specified in one or more separate files that may be accessible to one or more different parent processes are called standalone subprocesses. Subprocesses that are specified or defined in the code of a parent process are said to be defined inline, and are called inline subprocesses herein. A given subprocess may itself include one or more inline subprocesses, and/or may call one or more standalone subprocesses. Hence, various types of subprocesses discussed herein can be nested…” paragraphs 0080/0120). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Li with the teaching of Kumar because the teaching of Kumar would improve the system of Li by providing a chain or tree of related processes for processing web service requests.

As to claim 3, Li teaches the system of claim 2, wherein the service request includes process information data to be applied to all of the central server, the local servers, and the edge devices (“…For efficiency, a parent service sends identical service requests in parallel to its child services and combines the partial results from the children into the final response. The child services process a request recursively as its parent, thus sending a request to a grandchild…” paragraph 0093).

As to claim 5, Kumar teaches the system of claim 1, wherein for the service request, each of the central server, the local servers, and the edge devices is configured to create a process (one or more composites, e.g., the first composite 46 or the second composite 48/Other Partner Services/Processes 58) (“…For an asynchronous request from the subprocess to another partner process 58, a conversation identification for a normalized WS-addressing message may be sent to the partner service 58 along with instance identification (parent process instance ID) identifying an instance of the parent process 18, thereby facilitating routing of callback messages to the right instance of the parent process 18. Note that the other partner services and/or business processes 58 may be part of one or more composites, e.g., the first composite 46 or the second composite 48. WS-addressing supports the use of asynchronous interactions by specifying a common SOAP header containing an EndPoint Reference (EPR) to which the response is to be sent…Specific subprocesses discussed herein may include standalone subprocesses or inline subprocesses. Subprocesses that are specified in one or more separate files that may be accessible to one or more different parent processes are called standalone subprocesses. Subprocesses that are specified or defined in the code of a parent process are said to be defined inline, and are called inline subprocesses herein. A given subprocess may itself include one or more inline subprocesses, and/or may call one or more standalone subprocesses. Hence, various types of subprocesses discussed herein can be nested…” paragraphs 0080/0120). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Li with the teaching of Kumar because the teaching of Kumar would improve the system of Li by providing a chain or tree of related processes for processing web service requests.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2013/0085993 A1 to Li et al. in view of U.S. Pub. No. 2015/0046929 A1 to Kumar et al., as applied to claim 3 above, and further in view of W.O. No. 2019190545 A1 to Guim et al. and further in view of U.S. Pub. No. 2019/0065259 A1 to Venkata.

As to claim 4, Li as modified by Kumar teaches the system of claim 3; however, it is silent with reference to wherein the process information data includes a program for actual operation, input data link and format, output data link and format, and a time to be delivered after processing.
Guim teaches wherein the process information data (Service Requests 104) includes a program for actual operation (service type identifiers and/or service identifiers indicative of what functions as a service are being requested in the corresponding service requests 104), input data link and format (the service requests 104 may be in the form of “ExecFaaS(FuncID, Payload or Inputs, Service Provider ID, Tenant ID, Max Cost, Result Time, Priority ID, SLA ID, Resource Type)”), and a time to be delivered after processing (deadlines as future timestamp values stored in fields of service requests 104) (“…For example, the client devices 105 provide service parameters in the service requests 104 that are indicative of at least one of deadlines or round-trip latencies by which results of corresponding ones of the service requests 104 are to be received at corresponding ones of the client devices 105. Client devices 105 may specify deadlines as future timestamp values stored in fields of service requests 104…To specify round-trip latencies, client devices 105 may store round-trip latency duration values in fields of service requests 104, and the client devices 105 may also store start timestamps in the service requests 104 indicative of times at which the client devices 105 sent the service requests 104. In this manner, the gateway-level HQM 106 can determine the amount of time that has elapsed since the time the client device 105 sent the service request 104 to identify how much time remains to provide a result to the client device 105. Example service parameters can also include service type identifiers and/or service identifiers indicative of what functions as a service are being requested in the corresponding service requests 104. A service type identifier indicates a type of a function as a service that is being requested in a service request 104 (e.g., ServiceType = “ImageProcessing”). A service identifier indicates a specific instance of a function as a service that is being requested in a service request 104 (e.g., Service = “ImageProcessingFaceRecognition”). The service type identifiers and the service identifiers may be stored by the client devices 105 in fields of the service requests 104…The example client devices 105 may send the service requests 104 based on a uniform resource locator (URL) or an internet protocol (IP) address of the edge gateway 102. An example format that the client devices 105 can use to generate the service requests 104 may be in the form of “ExecFaaS(FuncID, Payload or Inputs, Service Provider ID, Tenant ID, Max Cost, Result Time, Priority ID, SLA ID, Resource Type),” which includes a number of service parameters as follows. In this example format, the “FuncID” service parameter specifies a service type identifier or a service identifier of a function as a service to which a service request 104 is directed. The example “Payload or Inputs” service parameter includes data to be processed by the requested function as a service. The example “Service Provider ID” service parameter specifies an FaaS/AFaaS service provider that is to provide the requested function as a service. Such service provider may be the provider to which users subscribe to access functions as a service. The example “Tenant ID” service parameter identifies the subscriber/customer of FaaS/AFaaS services…The example “Result Time” service parameter specifies a deadline or a round-trip latency defining a time-constraint by which a result of a function as a service is expected to be received at a requesting client device 105. As such, the “Result Time” service parameter corresponds to a deadline service parameter or a round-trip latency service parameter of the service request 104. In other examples, the “Result Time” service parameter may be omitted from the service request 104, and a deadline service parameter or a round-trip latency service parameter may instead be provided via the “SLA ID” service parameter as described below…” paragraphs 0015-0017). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Li and Kumar with the teaching of Guim because the teaching of Guim would improve the system of Li and Kumar by providing service request arguments for executing a task.

Venkata teaches wherein the process information data includes output data link and format (a set of service requests to identify inputs and outputs of the service requests) (“…In an embodiment, the system generates attribute models using the machine-learning model. The system may analyze a set of service requests to identify inputs and outputs of the service requests. Based on the inputs and outputs of the service requests, the system generates models. Two models may have some, or all, attributes in common. As an example, five people have submitted service requests including the attributes source, destination, and departure date…” paragraph 0097). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Li, Kumar and Guim with the teaching of Venkata because the teaching of Venkata would improve the system of Li, Kumar and Guim by providing a service request arguments template for returning a result of processing a service request task.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2013/0085993 A1 to Li et al. in view of U.S. Pub. No. 2015/0046929 A1 to Kumar et al., as applied to claim 1 above, and further in view of E.P. No. 2883329 B1 to Wang et al.
As to claim 7, Li as modified by Kumar teaches the system of claim 1; however, it is silent with reference to wherein in response to receiving the service request, the edge device is configured to check whether the service request is processible, and report whether the service request is processible to an upper device through a path opposite to the path through which the service request is received.

Wang teaches wherein in response to receiving the service request, the edge device (wherein each of the three service clusters is configured to report) is configured to check whether the service request is processible, and report whether the service request is processible to an upper device through a path opposite to the path through which the service request is received (report the updated dynamic disaster recovery policy of each of the three clusters to a corresponding client) (“…The computer information system of claim 3, wherein each of the three service clusters is configured to report its load balance to the disaster recovery node at a first predefined time interval and the disaster recovery node is configured to report the updated dynamic disaster recovery policy of each of the three clusters to a corresponding client at a second predefined time interval…”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Li and Kumar with the teaching of Wang because the teaching of Wang would improve the system of Li and Kumar by providing a technique for notifying a user or client of a malfunctioning service node.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2013/0085993 A1 to Li et al. in view of U.S. Pub. No. 2015/0046929 A1 to Kumar et al., and further in view of C.N. No. 102422616 A to Buckl et al.

As to claim 9, Li teaches a multi-layer distributed processing system for user service requests in a digital twin environment, comprising: a plurality of user terminals (Web Browsers 204) configured to receive service requests from users (“…Second, the administrator uses a web browser to interact with the root node. These interactions trigger the nodes in the tree to exchange messages using one of three management service processes…For clarity, only three nodes (201, 202, 203) and an administrative web browser 204 are shown in FIG. 1, wherein node 201 is a root node. The messages exchanged are numbered in FIG. 2 as t0 through t15…Upon receipt of an initial message t0 from administrative web browser 204 to root node 201, REST service discovery process 210 includes parallel and recursive GET requests initiated by root node 201, as illustrated by message t1 and message t2. In service discovery process 210, the V(P, S.sub.i) and rc(P, S.sub.i, K.sub.j) values on the child nodes 202 and 203 are calculated in parallel according to Equations (1) and (2). The values calculated by child nodes 202 and 203 are sent back in response message t3 and response message t4, from child nodes 202 and 203 respectively, to the root node 201…” paragraphs 0069/0070/0085/0099); a central server including a central service manager, the central service manager configured to receive the service requests from the plurality of user terminals (Root Node 201/Parent Node 502/Node 911) (“…Upon receipt of an initial message t0 from administrative web browser 204 to root node 201, REST service discovery process 210 includes parallel and recursive GET requests initiated by root node 201, as illustrated by message t1 and message t2. In service discovery process 210, the V(P, S.sub.i) and rc(P, S.sub.i, K.sub.j) values on the child nodes 202 and 203 are calculated in parallel according to Equations (1) and (2). The values calculated by child nodes 202 and 203 are sent back in response message t3 and response message t4, from child nodes 202 and 203 respectively, to the root node 201…For efficiency, a parent service sends identical service requests in parallel to its child services and combines the partial results from the children into the final response. The child services process a request recursively as its parent, thus sending a request to a grandchild…” paragraphs 0070/0085/0093); a plurality of local servers (Child Nodes (Services) 202/203/Child Node 503/Nodes 912/913) including a local service manager, the local service manager configured to receive the service request from the central service manager (Root Node 201/Parent Node 502/Node 911) (“…Upon receipt of an initial message t0 from administrative web browser 204 to root node 201, REST service discovery process 210 includes parallel and recursive GET requests initiated by root node 201, as illustrated by message t1 and message t2. In service discovery process 210, the V(P, S.sub.i) and rc(P, S.sub.i, K.sub.j) values on the child nodes 202 and 203 are calculated in parallel according to Equations (1) and (2). The values calculated by child nodes 202 and 203 are sent back in response message t3 and response message t4, from child nodes 202 and 203 respectively, to the root node 201…For efficiency, a parent service sends identical service requests in parallel to its child services and combines the partial results from the children into the final response. The child services process a request recursively as its parent, thus sending a request to a grandchild…” paragraphs 0070/0085/0093); a plurality of edge devices (grandchild) including an edge service manager, the edge service manager configured to receive the service request from the local server (Child Nodes (Services) 202/203/Child Node 503) (“…For efficiency, a parent service sends identical service requests in parallel to its child services and combines the partial results from the children into the final response. The child services process a request recursively as its parent, thus sending a request to a grandchild…” paragraph 0093); and wherein the central service manager is configured to distribute the service requests from the users to lower layers through propagation between the service managers (“…For efficiency, a parent service sends identical service requests in parallel to its child services and combines the partial results from the children into the final response. The child services process a request recursively as its parent, thus sending a request to a grandchild…” paragraph 0093).

Li is silent with reference to the central service manager configured to create a first process for processing the service requests, the local service manager configured to create a second process for processing the service request, the edge service manager configured to receive the service request from the local server and create a third process for processing the service request, and a sensor connected to the plurality of edge devices and configured to provide sensing data for the service requests.
Kumar teaches the central service manager configured to create a first process for processing the service requests (Parent Process 18), the local service manager configured to create a second process for processing the service request (Subprocess 50), and the edge service manager configured to receive the service request from the local server and create a third process for processing the service request (Other Partner Services/Processes 58). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Li with the teaching of Kumar because the teaching of Kumar would improve the system of Li by providing a chain or tree of related processes for processing web service requests.

Buckl teaches a sensor (sensor) connected to the plurality of edge devices (service chain) and configured to provide sensing data for the service requests (“…In a particularly preferred embodiment, the first service the number comprises one or more Web services, the Web service sufficient claims by the present technology. as for a first protocol message of the first service, preferably using the same SOAP=Simple Object Access Protocol SOAP protocol (simple object access protocol) by the present technology disclosed herein. the second service is preferably distributed in the network service, wherein the network, specifically the so-called embedded network. embedded network has been sufficiently disclosed by the prior art, and makes it possible to service embedded in a suitable network node, the network node may be, for example, sensor, actuator or other electronic unit. Here, the second service at least in part by one or more specified through the second protocol of the service chain are connected in the network…between the discussed first and second service before the communication according to the invention by means of FIG. 1 to FIG. 3 in the form of embedded service in a network formed by a plurality of network nodes of a description of data-driven quality-of-service. According to FIG. 1, the network comprises network nodes N1, N2, N3, N4 and N5, these network nodes may represent a device of different type can be wireless and/or wired communication with each other. in particular these network nodes may represent a sensor, actuator, pure computation unit and a combination of these device. on the multiple nodes implementing a second service, the second service through with reference numerals SE1, SE2, SE3, SE4 and SE5 of the gear. In this way a plurality of second service through distributed in the network, wherein the second service can communicate with each other through the second protocol, and can mutually connect. the second protocol based on the data in the network of FIG. 1 in driven communication...associated with data input and data output requests and responses for the Web service with the embedded service, for instance, can be realized by the WSDL document. the corresponding data of the document provided by a service bridge, and are expected for the corresponding Web service request and response thereof in service based on the request, the bridge service of the service with the corresponding data input and output. in the bridge, which further specifies a respective service chain until at the upper service from for output data on which to perform the data input of a service. The service chain can be optimized by a proper way, as reference to FIG. 1 to FIG. 3 described above. In another embodiment, the WSDL document does not need to have been in service bridge, but when necessary, that is to say in the Web service through the URI to that generates the corresponding addressing….” paragraphs 0009/0023/0039).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Li and Kumar with the teaching of Buckl because the teaching of Buckl would improve the system of Li and Kumar by providing sensors for capturing or sensing related information.

Allowable Subject Matter

Claims 6 and 8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Reasons for Allowance

The following is an examiner's statement of reasons for allowance: The closest prior art of record (U.S. Pub. No. 2013/0085993 A1 to Li et al. and U.S. Pub. No. 2015/0046929 A1 to Kumar et al.), taken alone or in combination, does not specifically disclose or suggest the claimed recitations (claims 4 and 6-8) when taken in the context of the claims as a whole. Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance."

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Pub. No. 2007/0150598 A1 to Potier et al., directed to a system and method that allows asynchronous requests and responses to be available within a given JMS server. U.S. Pub. No. 2017/0277616 A1 to Topiwala et al., directed to methods for capturing invocation traffic of software services. U.S. Pub. No. 2007/0226745 A1 to Haas et al., directed to a method and system for performing tasks when processing a client service request; the service request is processed by a group of processing elements including a main processing element and at least one offloading processing element. U.S. Pat. No. 8,495,170 B1 issued to Vosshall et al., directed to a computer-implemented system and method for managing service requests where the system includes a service provider having a number of server devices. U.S. Pub. No. 2022/0191648 A1 to Smith et al., directed to systems and techniques for a digital twin framework for next generation networks. U.S. Pat. No. 6,195,682 B1 issued to Ho et al., directed to a distributed information network in which a broker server is coupled to a plurality of child servers and to a plurality of clients in the network. U.S. Pub. No. 2021/0389983 A1 to Blue et al., directed to systems and methods that effectively serve to isolate processes in a computing environment. U.S. Pub. No. 2009/0205034 A1 to Williams et al., directed to systems and methods for creating a secure process on a web server, which can include creating an application manager process and creating an application host process. U.S. Pub. No. 2006/0259487 A1 to Havens et al., directed to a method for creating secure process objects.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES E ANYA whose telephone number is (571)272-3757. The examiner can normally be reached Mon-Fri, 9-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KEVIN YOUNG, can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES E ANYA/
Primary Examiner, Art Unit 2194

Prosecution Timeline

Dec 07, 2023 — Application Filed
Mar 06, 2026 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591471
KNOWLEDGE GRAPH REPRESENTATION OF CHANGES BETWEEN DIFFERENT VERSIONS OF APPLICATION PROGRAMMING INTERFACES
2y 5m to grant • Granted Mar 31, 2026
Patent 12591455
PARAMETER-BASED ADAPTIVE SCHEDULING OF JOBS
2y 5m to grant • Granted Mar 31, 2026
Patent 12585510
METHOD AND SYSTEM FOR AUTOMATED EVENT MANAGEMENT
2y 5m to grant • Granted Mar 24, 2026
Patent 12579014
METHOD AND A SYSTEM FOR PROCESSING USER EVENTS
2y 5m to grant • Granted Mar 17, 2026
Patent 12572393
CONTAINER CROSS-CLUSTER CAPACITY SCALING
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview (+33.5%): 99%
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 891 resolved cases by this examiner. Grant probability derived from career allow rate.
