Prosecution Insights
Last updated: April 19, 2026
Application No. 18/394,439

UNIVERSAL API STANDARD AND HUB FOR CLOUD INTEROPERABILITY

Non-Final OA: §102, §103
Filed: Dec 22, 2023
Examiner: MILLER, DANIEL E
Art Unit: 2194
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Anantyx LLC
OA Round: 1 (Non-Final)
Grant Probability: 41% (Moderate)
OA Rounds: 1-2
To Grant: 3y 8m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 41% of resolved cases (22 granted / 54 resolved; -14.3% vs TC avg)
Interview Lift: +36.9% (strong; allow rate in resolved cases with interview vs. without)
Typical Timeline: 3y 8m avg prosecution; 10 currently pending
Career History: 64 total applications across all art units

Statute-Specific Performance

§101: 22.3% (-17.7% vs TC avg)
§103: 38.7% (-1.3% vs TC avg)
§102: 15.7% (-24.3% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)
Tech Center averages are estimates; based on career data from 54 resolved cases.
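The card values above are related by simple arithmetic; the following is a quick reconstruction using only the figures shown, where the Tech Center average is implied by the reported delta rather than stated directly:

```python
# Reconstruct the dashboard figures from the raw counts shown above.
granted, resolved = 22, 54
allow_rate = granted / resolved   # career allow rate: ~40.7%, shown rounded as 41%

# The card reports -14.3% vs the Tech Center average, which implies:
tc_avg = allow_rate + 0.143       # ~55% TC average allow rate (estimate)

# Interview lift: grant probability with interview minus the baseline.
lift = 0.78 - 0.41                # ~+37%; the dashboard's +36.9% reflects unrounded inputs

print(f"{allow_rate:.1%}, {tc_avg:.1%}, {lift:+.1%}")
```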

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Specification

The disclosure is objected to because of the following informalities: In paragraph [0031] line 4, “MCIH 120” should read “MCIH 102”. In paragraph [0046] lines 1-2, the phrase “as shown at step 504” should be included. Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-6, 8, and 18-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 7,370,335 B1 (White).

With respect to claim 1, White teaches A method of enabling user access of cloud services provided by multiple distinct cloud computing systems, the method comprising (method summarized in the context of FIG. 6, [col 25 ln 51-64]; where the workflow engines running on servers 601 are cloud services by being accessible through the internet 670: "Further, each workflow engine server 601, application server 609 and intermediate server 608 can comprise software programming and network interfaces operable to communicate over network 670.
Network 670 can comprise any computer network known in the art operable to transport data, including LANs, WANs, Global Area Networks, such as the Internet, and wireless networks", [col 25 ln 44-50]): receiving, via an application programming interface (API) gateway of a cloud-connected system, an initial API call and a target cloud service (When a call/request/command is received from an application (e.g., application 366), public API 350..., [col 25 ln 52-54]; note that in FIG. 6, application 366 is connected to public API 350 through internet 670, [col 25 ln 44-50]; regarding the "target cloud service", the call is to a WfProcessMgr object, see [col 8 ln 25-43]; essentially, every WfProcessMgr represents a single workflow process definition, so a call to that definition gets routed to the appropriate adapter; a documentation example is given in Table 1 row 2, [col 11 and col 12], where there are two WfProcessMgr objects: one that maps to a BEA template definition and one that maps to an IBM template definition); determining, via a universal API hub of the cloud-connected system, brand-specific API formatting for the target cloud service (public API 350 can forward the call or command to the appropriate adapter in adapter layer 330 (e.g., adapter 332), [col 25 ln 54-56]; again see [col 8 ln 25-43], "Furthermore, a WfProcessMgr object can be used to forward calls from an application interfacing with Public API 350 to the appropriate adapter in adapter layer 330 (i.e., the adapter corresponding to the workflow engine with which the WfProcessMgr object corresponds). For example, if a call is made to a WfProjectMgr object representing a run time instance processed by workflow engine 312, the WfProjectMgr object can route the request to adapter 332", [col 8 ln 36-44]); transforming, via a transformation and communication engine of the cloud-connected system, the initial API call into a brand-specific API call using the brand-specific API formatting (Adapter 332 can translate the call or command into a native call or command for a particular workflow engine (e.g., workflow engine 312)..., [col 24 ln 56-58]; throughout the application these are called "vendor specific APIs", [col 3 ln 13-14]; but again see Table 1 row 2, [col 11 and col 12], where there are two WfProcessMgr objects, one mapping to a BEA template definition and one to an IBM template definition, where BEA and IBM are "brands"); and communicating, via the transformation and communication engine, the brand-specific API call to the target cloud service (Adapter 332... and forward the call or command to workflow engine 312 via the workflow engine's API (e.g., workflow engine API 322), [col 24 ln 56-60]; throughout the application these are called "vendor specific APIs", [col 3 ln 13-14]; but again see Table 1 row 2, [col 11 and col 12], where there are two WfProcessMgr objects, one mapping to a BEA template definition and one to an IBM template definition, where BEA and IBM are "brands"; in FIG. 6 these are cloud services by being accessible over the internet, [col 25 ln 44-50]).

With respect to claim 2, White teaches all of the limitations of claim 1, as noted above.
White further teaches wherein the initial API call is received in a universal API format without brand-specific API formatting (More particularly, embodiments of the present invention can provide a system and method for mapping vendor-specific workflow engine APIs to a standardized API, [col 3 ln 11-14]; The software programming can comprise a public API layer, further comprising an object model containing generic software objects representing underlying workflow processes and functionality, [col 3 ln 17-20]; the actual generic API calls are shown in Table 1 in the left column and the vendor-specific translations are shown in the right column, [cols 9-22]).

With respect to claim 3, White teaches all of the limitations of claim 2, as noted above. White further teaches receiving, via input of a user device or client application server, a desired cloud operation to be performed on the target cloud service; and transforming, via the universal API hub of the cloud-connected system, the desired cloud operation into a universal API format (process definition 215 can be imported into a particular application in application layer 360 (e.g., application 364). The application can then propagate process definition 215 down to public API layer 350 where it can be represented by a WfDefinition object, [col 25 ln 1-5]; see FIG. 3, which shows application layer 360 with application 366, and see also FIG. 6, which shows how the application layer can be implemented on application servers 609, which host applications such as 366; see also FIG. 5 showing a GUI for creating the workflow definitions).

With respect to claim 4, White teaches all of the limitations of claim 3, as noted above.
White further teaches wherein the input of the user device or client application server is received via a web browser or a client application on the user device or client application server (process definition 215 can be imported into a particular application in application layer 360 (e.g., application 364), [col 25 ln 1-5]; see FIG. 3, which shows application layer 360 with application 366, and see also FIG. 6, which shows how the application layer can be implemented on application servers 609, which host applications such as 366; see also FIG. 5 showing a GUI for creating the workflow definitions).

With respect to claim 5, White teaches all of the limitations of claim 1, as noted above. White further teaches wherein the initial API call is transformed via one or more microservices of the transformation and communication engine corresponding to the target cloud service (Adapter 332 can translate the call or command into a native call or command for a particular workflow engine (e.g., workflow engine 312)..., [col 24 ln 56-58]; throughout the application these are called "vendor specific APIs", [col 3 ln 13-14]; but again see Table 1 row 2, [col 11 and col 12], where there are two WfProcessMgr objects, one mapping to a BEA template definition and one to an IBM template definition, where BEA and IBM are "brands"; the microservice here refers to a particular adapter (such as 332 in FIG. 6) of the many adapters (332, 336, 338 in FIG. 6); also note in FIG. 6, how the internet layer 670 is between the application layer 609 and intermediate layer 608 as well as between intermediate layer 608 and workflow engine servers 601).

With respect to claim 6, White teaches all of the limitations of claim 1, as noted above. White further teaches wherein the brand-specific API call is communicated via one or more microservices of the transformation and communication engine corresponding to the target cloud service (Adapter 332... and forward the call or command to workflow engine 312 via the workflow engine's API (e.g., workflow engine API 322), [col 24 ln 56-60]; throughout the application these are called "vendor specific APIs", [col 3 ln 13-14]; but again see Table 1 row 2, [col 11 and col 12], where there are two WfProcessMgr objects, one mapping to a BEA template definition and one to an IBM template definition, where BEA and IBM are "brands"; in FIG. 6 these are cloud services by being accessible over the internet, [col 25 ln 44-50]; the microservice here refers to a particular adapter (such as 332 in FIG. 6) of the many adapters (332, 336, 338 in FIG. 6); also note in FIG. 6, how the internet layer 670 is between the application layer 609 and intermediate layer 608 as well as between intermediate layer 608 and workflow engine servers 601).

With respect to claim 8, White teaches all of the limitations of claim 1, as noted above. White further teaches receiving, via the transformation and communication engine, a response from the target cloud service corresponding to the cloud-brand specific API call; transforming, via the transformation and communication engine (adapter layer 330 in FIG. 6), the response into a universal API formatted response; and returning the universal API formatted response to a source of the initial API call (Adapter 332 can also map any response provided by workflow engine 312 to the generic objects of public API layer 350, [col 25 ln 61-63]; where the workflow engines running on servers 601 are cloud services by being accessible through the internet 670: "Further, each workflow engine server 601, application server 609 and intermediate server 608 can comprise software programming and network interfaces operable to communicate over network 670. Network 670 can comprise any computer network known in the art operable to transport data, including LANs, WANs, Global Area Networks, such as the Internet, and wireless networks", [col 25 ln 44-50]).
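The flow the rejection reads onto White (a universal call in, a per-brand adapter out) is essentially the adapter pattern. The following is a minimal sketch of that claim-1 flow; the brand names, operations, and payload shapes are hypothetical and are not taken from White or the application:

```python
# Sketch of the claimed flow: a gateway accepts a universal-format call,
# the hub resolves the brand-specific adapter for the target service, and
# the adapter transforms the call into the brand's own format.
from dataclasses import dataclass
from typing import Callable

@dataclass
class UniversalCall:
    operation: str   # universal operation name, e.g. "start_process"
    target: str      # target cloud service (hypothetical brand key)
    payload: dict

# Per-brand transformation "microservices" (cf. claims 5-6): one adapter
# per brand, registered much like the adapter layer described above.
ADAPTERS: dict[str, Callable[[UniversalCall], dict]] = {
    "brand_a": lambda c: {"Action": c.operation.title().replace("_", ""), **c.payload},
    "brand_b": lambda c: {"op": c.operation, "args": c.payload},
}

def gateway(call: UniversalCall) -> dict:
    """API gateway + universal hub: pick the brand adapter for the target
    service, transform, and return the brand-specific call (claim 1 steps)."""
    adapter = ADAPTERS[call.target]   # hub determines brand-specific formatting
    return adapter(call)              # transformation engine produces the call

print(gateway(UniversalCall("start_process", "brand_b", {"id": 7})))
# -> {'op': 'start_process', 'args': {'id': 7}}
```

Registering a new entry in `ADAPTERS` is the analogue of claim 18's "establishing a new microservice" from supplied translation information.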
With respect to claim 18, White teaches A method of adding further microservices to a cloud-connected system for accessing cloud services provided by multiple distinct cloud computing systems, the method comprising (see [col 7 ln 57]-[col 8 ln 24], FIG. 5 for the creation tool, [col 24 ln 50]-[col 25 ln 5]; and FIG. 6 for the end resulting system after the microservices are added): receiving, from a developer user, API call translation information for a new cloud provider and/or a new cloud service of an existing cloud provider (In operation, an organization or user can create a process definition 215 using modeling and definition tools 212. The user can import the process definition 215 to public API 350, [col 7 ln 65-68]; Using modeling and definition tool 212, a user can define a process by linking or connecting representation of manual and automatic activities, [col 24 ln 56-58]; process definition 215 can be imported into a particular application in application layer 360 (e.g., application 364). The application can then propagate process definition 215 down to public API layer 350 where it can be represented by a WfDefinition object, [col 25 ln 1-5]); identifying, via the cloud-connected system, the cloud brand and the cloud service of the API call translation information (Upon importation of process definition 215, the adapters of adapter layer 330 can translate process definition 215 into the process definition syntax that can be used by each workflow engine. For example, adapter 336 can translate process definition 215 into a syntax usable by workflow engine 312 (e.g., can translate XPDL into an FDL representation). Thus, both a standard process definition representation can be maintained (i.e., as a WfDefinition object) at public API 350 and a vendor-specific process definition representation can be maintained at each workflow engine (i.e., a vendor-specific object for process definition 215 can be maintained in persistent storage 370 for each workflow engine in workflow engine layer 310), [col 8 ln 8-20]); and establishing a new microservice of a transformation and communication engine for the cloud service with instructions for transformation of API calls using the API call translation information (In this manner, workflow engines can operate with a native process definition representation while applications in application layer 360 can be written to a standard process definition representation (i.e., the WfObjects), [col 8 ln 20-24]; for the actual microservice in operation, see [col 25 ln 44-63]; the definition gets the API 350 and the workflow engine 312 working together, and this section explains how they do it).

With respect to claim 19, White teaches all of the limitations of claim 18, as noted above. White further teaches receiving, from a user device or client application server, an initial API call and a target cloud service, wherein the target cloud service corresponds to the new microservice (for creation of the process that uses 312 see [col 7 ln 65]-[col 8 ln 24]; When a call/request/command is received from an application (e.g., application 366), public API 350..., [col 25 ln 52-54]; note that in FIG. 6, application 366 is connected to public API 350 through internet 670, [col 25 ln 44-50]; regarding the "target cloud service", the call is to a WfProcessMgr object, see [col 8 ln 25-43]; essentially, every WfProcessMgr represents a single workflow process definition, so a call to that definition gets routed to the appropriate adapter; a documentation example is given in Table 1 row 2, [col 11 and col 12], where there are two WfProcessMgr objects: one that maps to a BEA template definition and one that maps to an IBM template definition); transforming, via the new microservice of the transformation and communication engine, the initial API call into a brand-specific API call using the API call translation information (Adapter 332 can translate the call or command into a native call or command for a particular workflow engine (e.g., workflow engine 312)..., [col 24 ln 56-58]; throughout the application these are called "vendor specific APIs", [col 3 ln 13-14]; but again see Table 1 row 2, [col 11 and col 12], where there are two WfProcessMgr objects, one mapping to a BEA template definition and one to an IBM template definition, where BEA and IBM are "brands"); communicating, via the new microservice of the transformation and communication engine, the brand-specific API call to the target cloud service (Adapter 332... and forward the call or command to workflow engine 312 via the workflow engine's API (e.g., workflow engine API 322), [col 24 ln 56-60]; throughout the application these are called "vendor specific APIs", [col 3 ln 13-14]; but again see Table 1 row 2, [col 11 and col 12], where there are two WfProcessMgr objects, one mapping to a BEA template definition and one to an IBM template definition, where BEA and IBM are "brands"; in FIG. 6 these are cloud services by being accessible over the internet, [col 25 ln 44-50]).

With respect to claim 20, White teaches all of the limitations of claim 18, as noted above. White further teaches receiving, via the new microservice of the transformation and communication engine, a response from the target cloud service corresponding to the cloud-brand specific API call (In FIG.
4, see the arrow starting from engine 312, and going to adapter 322); transforming, via the new microservice of the transformation and communication engine, the response into a universal API formatted response using the API call translation information (Adapter 332 can also map any response provided by workflow engine 312 to the generic objects of public API layer 350, [col 25 ln 61-63]; where the workflow engines running on servers 601 are cloud services by being accessible through the internet 670: "Further, each workflow engine server 601, application server 609 and intermediate server 608 can comprise software programming and network interfaces operable to communicate over network 670. Network 670 can comprise any computer network known in the art operable to transport data, including LANs, WANs, Global Area Networks, such as the Internet, and wireless networks", [col 25 ln 44-50]; the microservice here refers to a particular adapter (such as 332 in FIG. 6) of the many adapters (332, 336, 338 in FIG. 6); also note in FIG. 6, how the internet layer 670 is between the application layer 609 and intermediate layer 608 as well as between intermediate layer 608 and workflow engine servers 601); and returning the universal API formatted response to the user device or client application server (in FIG. 4, see the arrow going from Public API back to Application; where first “Adapter 332 can also map any response provided by workflow engine 312 to the generic objects of public API layer 350”, [col 25 ln 61-63]; and then “The WfRequestor object can represent a user responsible for creating a process instance. Put differently, a WfRequestor object can represent the owner of a process instance. As requestor, the user can be the notification target for significant events relating to the process instance, including escalation and completion”, [col 9 ln 6-11]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 7 and 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 7,370,335 B1 (White) in view of US 8,910,185 B2 (Dixon).

With respect to claim 7, White teaches all of the limitations of claim 1, as noted above. White further teaches receiving, via the API gateway of the cloud-connected system, one or more further initial API calls to one or more target cloud services (API gateway 350, which is part of the cloud-connected system shown in FIG. 6; the actual method is shown in FIG. 4, which is for a plurality of “calls”: “FIG. 4 is a schematic illustrating one embodiment of a method for serving client workflow requests, commands, and/or calls”, [col 4 ln 14-17]; a single process may include more than one call, “start, stop, pause, resume, and so on”, [col 8 ln 48]; and the system itself is designed to accept and translate calls for at least two heterogeneous workflow engines [claim 25], “translating and mapping said set of generic software objects of said public API layer to and from said set of native software objects of said each workflow engine API through an API adapter layer having a plurality of adapters”, [col 30 ln 12-16]). White does not teach passing the initial API call and the one or more further initial API calls to a message queue of the cloud-connected system for processing.

However, Dixon teaches and passing the initial API call and the one or more further initial API calls to a message queue of the cloud-connected system for processing (“An API bridge service retrieves a generic API message request, placed in a request queue of a message queuing network by a message queuing application, from the request queue. The API bridge service formats the generic API request into a particular API call for at least one specific API”, [Abstract] lines 1-5; see FIG. 1 showing the API request queue 124, “builds messages for placement in API request queue 124”, [col 4 ln 65-66]; and the system is built for multiple “calls” and “responses”, [col 6 ln 22-30]).

It would have been obvious to one skilled in the art before the effective filing date to combine White with Dixon because a teaching, suggestion, or motivation in the prior art would have led one skilled in the art to combine prior art teachings to arrive at the claimed invention. White discloses a system and method that teaches all of the claimed features except for the message queue. Dixon teaches that both APIs and message queues exist in the prior art, but integrating “the message queuing application to call the specific APIs increases the complexity” (Dixon [col 1 ln 42-43]). The purpose of the invention in Dixon is to solve said need (Dixon [col 1 ln 49]). The way Dixon solves the problem is “In the example, generic API interface 120 provides a time independent, asynchronous API interface to message queuing applications... such that to access specific API services, message queuing application 102 is not required to maintain complex coding for specific APIs or dependency on API bindings within the code of message queuing application 102” (Dixon [col 8 ln 38-46]). Thus, a person having skill in the art would have a reasonable expectation of successfully integrating asynchronous communications without added complexity into the system and method of White by modifying White with the message queue of Dixon. Therefore, it would have been obvious to combine White with Dixon to a person having ordinary skill in the art, and this claim is rejected under 35 U.S.C. 103.

With respect to claim 9, White teaches all of the limitations of claim 7, as noted above. White does not teach queueing, via a message queue of the cloud-connected system, the universal API formatted response for processing, wherein the universal API formatted response is queued for processing along with further API calls or further responses. However, Dixon teaches queueing, via a message queue of the cloud-connected system, the universal API formatted response for processing, wherein the universal API formatted response is queued for processing along with further API calls or further responses (Responsive to the API bridge service receiving at least one API specific response from at least one specific API, the API bridge service translates at least one API specific response into a response message comprising a generic API response. The API bridge service places the response message in a response queue of the message queuing network, wherein the message queuing application listens to the response queue for the response message, [Abstract] lines 8-15).

It would have been obvious to one skilled in the art before the effective filing date to combine White with Dixon because a teaching, suggestion, or motivation in the prior art would have led one skilled in the art to combine prior art teachings to arrive at the claimed invention. White discloses a system and method that teaches all of the claimed features except for the message queue.
Dixon teaches that both APIs and message queues exist in the prior art, but integrating “the message queuing application to call the specific APIs increases the complexity” (Dixon [col 1 ln 42-43]). The purpose of the invention in Dixon is to solve said need (Dixon [col 1 ln 49]). The way Dixon solves the problem is “In the example, generic API interface 120 provides a time independent, asynchronous API interface to message queuing applications... such that to access specific API services, message queuing application 102 is not required to maintain complex coding for specific APIs or dependency on API bindings within the code of message queuing application 102” (Dixon [col 8 ln 38-46]). Thus, a person having skill in the art would have a reasonable expectation of successfully integrating asynchronous communications without added complexity into the system and method of White by modifying White with the message queue of Dixon. Therefore, it would have been obvious to combine White with Dixon to a person having ordinary skill in the art, and this claim is rejected under 35 U.S.C. 103.

Claim(s) 10-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 7,370,335 B1 (White) in view of “Persistent Object Service Specification” (2000-OMG).

With respect to claim 10, White teaches all of the limitations of claim 8, as noted above. White further teaches storing the initial API call ...
of the cloud-connected system (each type of API call is stored as a “process definition”, which maps the API call to discrete activity steps, then the initial API call is stored as a “process instance”, [col 4 ln 30-40]; the public API 350 can maintain a representation of the process definition 215 as an object in the public API 350, [col 8 ln 1-2]; the initial API call creates a WfProcess object, see specifically “WfProcess.createProcess(WfRequestor requester, ProcessData data)”, [col 11 command 4], see also [col 8 ln 58-59]; which creates the process instance associated with a WfDefinition object, and has the status “open.not_running”; then use “getInputProcessDataInfo()” to get the process info for that process, [col 11 command 6]; the process definitions are maintained in persistent storage 370, [col 8 ln 14-24]; native objects are persistently stored, [see FIG. 4]; and the content created and edited by workflows is persistently stored, [col 22 ln 48-49]); and updating ... to include the universal API formatted response corresponding to the initial API call (In one embodiment of the present invention workflow definition 215 can be dynamically updated by activities of run time phase 22, [col 6 ln 1-4]; Public API 350 can further comprise a WfPayload object to represent a "payload." The payload can associate various content items with a particular process instance, [col 21 ln 61-63]).

White does not describe a database for persisting the objects. Therefore, White does not teach storing ... in an entry of a database ..., and updating the entry of the database. However, 2000-OMG teaches storing ... in an entry of a database ... (see FIG. 2-1, [page 2-3]; the persistent object is created by the client and stored in the datastore as controlled by the PO interface, “Creating a PID for the PO and initializing the PID. For storage, whatever location information is not specified will be determined by the Datastore. For a retrieval or delete operation, the location information must be complete”, see [page 2-7 paragraph 3 bullet 1]; see the interface specifically to see that entries are stored by PID, void store(PID p), which is the persistent identifier [pages 2-7 and 2-8]; A PDS may use either a standard or a proprietary interface to its Datastore. A Datastore might be a file, virtual memory, some kind of database, or anything that can store information. This specification defines one Datastore interface that can be implemented by a variety of databases, [page 2-14 paragraph 2]), and updating the entry of the database ... (see FIG. 2-1, [page 2-3]; The persistent state may be updated as operations are performed on the object. This operation returns the PDS that handles persistence for use by those Protocols that require the PO to call the PDS, [page 2-8 paragraph 1]; see the interface specifically to see that entries are stored and updated by PID, PDS connect(PID p), which is the persistent identifier [pages 2-7 and 2-8]).

It would have been obvious to one skilled in the art before the effective filing date to combine White with 2000-OMG because a teaching, suggestion, or motivation in the prior art would have led one skilled in the art to combine prior art teachings to arrive at the claimed invention. White discloses a system and method that teaches all of the claimed features except for a database. White provides some implementation details of how objects are created and stored, specifically using COBRA/IDL, which is a typo (should be CORBA/IDL), (see White [col 23 ln 52]). 2000-OMG is the specification for persisting objects within the CORBA/IDL specification and language, (see OMG [page iv paragraph 1]). 2000-OMG teaches: Figure 1-1 shows the participants in the Persistent Object Service. The state of the object can be considered in two parts: the dynamic state, which is typically in memory and is not likely to exist for the whole lifetime of the object (for example, it would not be preserved in the event of a system failure), and the persistent state, which the object could use to reconstruct the dynamic state. (See 2000-OMG [page 1-2 paragraph 1]).

A person having skill in the art would have a reasonable expectation of successfully persisting state of objects in the event of system failure in the system and method of White by modifying White with the persistent object data store of 2000-OMG. Therefore, it would have been obvious to combine White with 2000-OMG to a person having ordinary skill in the art, and this claim is rejected under 35 U.S.C. 103.

With respect to claim 11, White teaches A cloud-connected system for accessing cloud services provided by multiple distinct cloud computing systems, the cloud-connected system comprising (see system in FIG. 6, where the workflow engines running on servers 601 are cloud services by being accessible through the internet 670: "Further, each workflow engine server 601, application server 609 and intermediate server 608 can comprise software programming and network interfaces operable to communicate over network 670. Network 670 can comprise any computer network known in the art operable to transport data, including LANs, WANs, Global Area Networks, such as the Internet, and wireless networks", [col 25 ln 44-50]): a processor (CPU 662, [col 25 ln 35]); a storage device (memory 626, [col 25 ln 36]) communicatively coupled to the processor (connected as shown in FIG.
6) , wherein the storage device is configured to store computer-executable instructions, which, when executed by a processor, cause the processor to (adapter layer 330 with components that represent instructions, stored in memory 525 and executing on CPU 662, [col 25 ln 35-43]) : receive, via a web server, one or more initial application programming interface (API) calls and one or more target cloud services from one or more user devices or client application servers ( When a call/request/command is received from an application ( e.g., application 366), public API 350..., [col 25 ln 52-54]; note that in FIG. 6, application 366 is connected to public API 350 through internet 670, [col 25 ln 44-50]; regarding the "target cloud service", the call is to a WfProcessMgr object, see [col 8 ln 25-43]; essentially, every WfProcessMgr represents a single workflow process definition, so a call to that definition gets routed to the appropriate adapter; a documentation example is given in Table 1 row 2, [col 11 and col 12], where there are two WfProcessMgr . objects one that maps to a BEA template definition and one that maps to an IBM template definition ) ; pass the initial API calls from the web server to an API gateway ( public API 350 can forward the call or command to the appropriate adapter in adapter layer 330 (e.g., adapter 332), [col 25 ln 54-56]; again see [col 8 ln 25-43], "Furthermore, a WfProcessMgr object can be used to forward calls from an application interfacing with Public API 350 to the appropriate adapter in adapter layer 330 (i.e., the adapter corresponding to the workflow engine with which the WfProcessMgr object corresponds). 
For example, if a call is made to a WfProjectMgr object representing a run time instance processed by workflow engine 312, the WfProjectMgr object can route the request to adapter 332", [col 8 ln 36-44]);

transform, via a transformation and communication engine, the initial API calls into brand-specific formats of the target cloud services (Adapter 332 can translate the call or command into a native call or command for a particular workflow engine (e.g., workflow engine 312)..., [col 24 ln 56-58]; throughout the application these are called "vendor specific APIs", [col 3 ln 13-14]; but again see Table 1 row 2, [col 11 and col 12], where there are two WfProcessMgr objects, one that maps to a BEA template definition and one that maps to an IBM template definition, where BEA and IBM are "brands");

store the one or more transformed API calls ... (each type of API call is stored as a "process definition", which maps the API call to discrete activity steps, then the initial API call is stored as a "process instance", [col 4 ln 30-40]; the public API 350 can maintain a representation of the process definition 215 as an object in the public API 350, [col 8 ln 1-2]; the initial API call creates a WfProcess object, see specifically "WfProcess.createProcess(WfRequestor requester, ProcessData data)", [col 11 command 4], see also [col 8 ln 58-59]; which creates the process instance associated with a WfDefinition object, and has the status "open.not_running"; then use "getInputProcessDataInfo()" to get the process info for that process, [col 11 command 6]; the process definitions are maintained in persistent storage 370, [col 8 ln 14-24]; native objects are persistently stored, [see FIG. 4]; and the content created and edited by workflows is persistently stored, [col 22 ln 48-49]); and

call, via the transformation and communication engine, the target cloud services using the transformed API calls (Adapter 332...
and forward the call or command to workflow engine 312 via the workflow engine's API (e.g., workflow engine API 322), [col 24 ln 56-60]; throughout the application these are called "vendor specific APIs", [col 3 ln 13-14]; but again see Table 1 row 2, [col 11 and col 12], where there are two WfProcessMgr objects, one that maps to a BEA template definition and one that maps to an IBM template definition, where BEA and IBM are "brands"; in FIG. 6 these are cloud services by being accessible over the internet, [col 25 ln 44-50]).

White does not teach store ... in an entry of a database in communication with the universal API hub. However, 2000-OMG teaches store ... in an entry of a database in communication with the universal API hub (see FIG. 2-1, [page 2-3]; the persistent object is created by the client and stored in the datastore as controlled by the PO interface, "Creating a PID for the PO and initializing the PID. For storage, whatever location information is not specified will be determined by the Datastore. For a retrieval or delete operation, the location information must be complete", see [page 2-7 paragraph 3 bullet 1]; see the interface specifically to see that entries are stored by PID, void store(PID p), which is the persistent identifier, [pages 2-7 and 2-8]; "A PDS may use either a standard or a proprietary interface to its Datastore. A Datastore might be a file, virtual memory, some kind of database, or anything that can store information. This specification defines one Datastore interface that can be implemented by a variety of databases", [page 2-14 paragraph 2]).

It would have been obvious to one skilled in the art before the effective filing date to combine White with 2000-OMG because a teaching, suggestion, or motivation in the prior art would have led one skilled in the art to combine prior art teaching to arrive at the claimed invention. White discloses a system and method that teaches all of the claimed features except for a database.
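As an editorial aside for readers unfamiliar with the architecture being mapped: the claim 11 flow the examiner walks through above (receive a generic API call, route it through a gateway, translate it with a per-vendor adapter, persist the transformed call, then invoke the target service) resembles the classic adapter pattern. A minimal illustrative sketch follows; every class, method, and field name is hypothetical, drawn from neither White nor the application.

```python
# Hypothetical sketch of a "universal API hub": a generic call is routed to a
# vendor-specific adapter, persisted, and then would be dispatched to the
# target service. Names are illustrative only.

class BeaAdapter:
    def transform(self, call):
        # Translate the generic call into this vendor's native format.
        return {"bea_op": call["action"], "bea_args": call["params"]}

class IbmAdapter:
    def transform(self, call):
        return {"ibmCommand": call["action"], "payload": call["params"]}

class UniversalApiHub:
    def __init__(self):
        self.adapters = {"bea": BeaAdapter(), "ibm": IbmAdapter()}
        self.store = {}   # stands in for the claimed database entry
        self.next_id = 0

    def handle(self, call, target):
        adapter = self.adapters[target]      # gateway routing step
        native = adapter.transform(call)     # brand-specific transformation
        self.next_id += 1
        self.store[self.next_id] = native    # persist the transformed call
        return self.next_id, native          # caller would dispatch `native`

hub = UniversalApiHub()
entry_id, native = hub.handle({"action": "start", "params": {"doc": 7}}, "ibm")
print(entry_id, native)  # 1 {'ibmCommand': 'start', 'payload': {'doc': 7}}
```

The point of contention in the rejection is the persistence step: White routes and translates calls, while the database-entry storage is supplied by 2000-OMG.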
White provides some implementation details of how objects are created and stored, specifically using COBRA/IDL, which is a typo (should be CORBA/IDL), (see White [col 23 ln 52]). 2000-OMG is the specification for persisting objects within the CORBA/IDL specification and language, (see OMG [page iv paragraph 1]). 2000-OMG teaches: Figure 1-1 shows the participants in the Persistent Object Service. The state of the object can be considered in two parts: the dynamic state, which is typically in memory and is not likely to exist for the whole lifetime of the object (for example, it would not be preserved in the event of a system failure), and the persistent state, which the object could use to reconstruct the dynamic state. (See 2000-OMG [page 1-2 paragraph 1]). A person having skill in the art would have a reasonable expectation of successfully persisting state of objects in the event of system failure in the system and method of White by modifying White with the persistent object data store of 2000-OMG. Therefore, it would have been obvious to combine White with 2000-OMG to a person having ordinary skill in the art, and this claim is rejected under 35 U.S.C. 103.

Claim(s) 12-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 7,370,335 B1 (White) in view of "Persistent Object Service Specification" (2000-OMG) in further view of US 8,910,185 B2 (Dixon).

With respect to claim 12, White in view of 2000-OMG teaches all of the limitations of claim 11, as noted above. White further teaches communicate, via one or more microservices of the transformation and communication engine, the transformed API calls to the target cloud services (Adapter 332...
and forward the call or command to workflow engine 312 via the workflow engine's API (e.g., workflow engine API 322), [col 24 ln 56-60]; throughout the application these are called "vendor specific APIs", [col 3 ln 13-14]; but again see Table 1 row 2, [col 11 and col 12]; cloud services by being accessible through the internet 670: "Further, each workflow engine server 601, application server 609 and immediate server 608 can comprise software programming and network interfaces operable to communicate over network 670. Network 670 can comprise any computer network known in the art operable to transport data, including LANS, WANs, Global Area Networks, such as the Internet, and wireless networks", [col 25 ln 44-50]).

White and 2000-OMG do not teach wherein the computer-executable instructions cause the processor to further: queue, via a queueing engine of the universal API hub, the initial API calls for transformation. However, Dixon teaches wherein the computer-executable instructions cause the processor to further: queue, via a queueing engine of the universal API hub, the initial API calls for transformation ("An API bridge service retrieves a generic API message request, placed in a request queue of a message queuing network by a message queuing application, from the request queue. The API bridge service formats the generic API request into a particular API call for at least one specific API", [Abstract] lines 1-5; see FIG. 1 showing the API request queue 124, "builds messages for placement in API request queue 124", [col 4 ln 65-66]; and the system is built for multiple "calls" and "responses", [col 6 ln 22-30]).

It would have been obvious to one skilled in the art before the effective filing date to combine White in view of OMG with Dixon because a teaching, suggestion, or motivation in the prior art would have led one skilled in the art to combine prior art teaching to arrive at the claimed invention.
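For context on the mechanism Dixon supplies: generic API requests are buffered in a request queue and drained by a bridge that emits specific API calls, decoupling the requesting application from vendor bindings. A small sketch under those assumptions (all names hypothetical, not taken from Dixon):

```python
# Hypothetical sketch of Dixon-style queued API bridging: generic requests
# are placed in a request queue; a bridge worker dequeues each one and
# formats it into a specific API call. Names are illustrative only.
import queue

request_queue = queue.Queue()

def bridge_worker():
    """Drain the queue, translating each generic request into a specific call."""
    calls = []
    while not request_queue.empty():
        generic = request_queue.get()
        # Format the generic request into a vendor-specific API invocation.
        calls.append(f"{generic['api']}.{generic['op']}()")
        request_queue.task_done()
    return calls

request_queue.put({"api": "storage_v2", "op": "put_object"})
request_queue.put({"api": "compute_v1", "op": "start_vm"})
print(bridge_worker())  # ['storage_v2.put_object()', 'compute_v1.start_vm()']
```

This time-independent, asynchronous decoupling is the benefit the examiner cites as the motivation to combine.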
White discloses a system and method that teaches all of the claimed features except for the message queue. Dixon teaches that both APIs and message queues exist in the prior art, but integrating "the message queuing application to call the specific APIs increases the complexity", (Dixon [col 1 ln 42-43]). The purpose of the invention in Dixon is to solve said need, (Dixon [col 1 ln 49]). The way Dixon solves the problem is "In the example, generic API interface 120 provides a time independent, asynchronous API interface to message queuing applications... such that to access specific API services, message queuing application 102 is not required to maintain complex coding for specific APIs or dependency on API bindings within the code of message queuing application 102", (Dixon [col 8 ln 38-46]). Thus, a person having skill in the art would have a reasonable expectation of successfully integrating asynchronous communications without added complexity into the system and method of White in view of OMG by modifying White in view of OMG with the message queue of Dixon. Therefore, it would have been obvious to combine White in view of OMG with Dixon to a person having ordinary skill in the art, and this claim is rejected under 35 U.S.C. 103.

With respect to claim 13, White in view of 2000-OMG teaches all of the limitations of claim 12, as noted above.
White further teaches wherein the computer-executable instructions cause the processor to further: receive, via the transformation and communication engine, one or more responses to the transformed API calls from the target cloud services (Adapter 332 can also map any response provided by workflow engine 312 to the generic objects of public API layer, [col 25 ln 61-63]; where the workflow engines running on servers 601 are cloud services by being accessible through the internet 670: "Further, each workflow engine server 601, application server 609 and immediate server 608 can comprise software programming and network interfaces operable to communicate over network 670. Network 670 can comprise any computer network known in the art operable to transport data, including LANS, WANs, Global Area Networks, such as the Internet, and wireless networks", [col 25 ln 44-50]).

White and 2000-OMG do not teach queue, via the queuing engine, the responses from the target cloud services for transformation. However, Dixon teaches queue, via the queuing engine, the responses from the target cloud services for transformation ("Responsive to the API bridge service receiving at least one API specific response from at least one specific API, the API bridge service translates at least one API specific response into a response message comprising a generic API response. The API bridge service places the response message in a response queue of the message queuing network, wherein the message queuing application listens to the response queue for the response message", [Abstract] lines 8-15).

It would have been obvious to one skilled in the art before the effective filing date to combine White in view of OMG with Dixon because a teaching, suggestion, or motivation in the prior art would have led one skilled in the art to combine prior art teaching to arrive at the claimed invention. White discloses a system and method that teaches all of the claimed features except for the message queue.
Dixon teaches that both APIs and message queues exist in the prior art, but integrating "the message queuing application to call the specific APIs increases the complexity", (Dixon [col 1 ln 42-43]). The purpose of the invention in Dixon is to solve said need, (Dixon [col 1 ln 49]). The way Dixon solves the problem is "In the example, generic API interface 120 provides a time independent, asynchronous API interface to message queuing applications... such that to access specific API services, message queuing application 102 is not required to maintain complex coding for specific APIs or dependency on API bindings within the code of message queuing application 102", (Dixon [col 8 ln 38-46]). Thus, a person having skill in the art would have a reasonable expectation of successfully integrating asynchronous communications without added complexity into the system and method of White in view of OMG by modifying White in view of OMG with the message queue of Dixon. Therefore, it would have been obvious to combine White in view of OMG with Dixon to a person having ordinary skill in the art, and this claim is rejected under 35 U.S.C. 103.

With respect to claim 14, White in view of 2000-OMG teaches all of the limitations of claim 13, as noted above. White further teaches wherein the computer-executable instructions cause the processor to further: transform, via the one or more microservices of the transformation and communication engine, the responses into a universal API format (Adapter 332 can also map any response provided by workflow engine 312 to the generic objects of public API layer 350, [col 25 ln 61-63]; where the workflow engines running on servers 601 are cloud services by being accessible through the internet 670: "Further, each workflow engine server 601, application server 609 and immediate server 608 can comprise software programming and network interfaces operable to communicate over network 670.
Network 670 can comprise any computer network known in the art operable to transport data, including LANS, WANs, Global Area Networks, such as the Internet, and wireless networks", [col 25 ln 44-50]; the microservice here refers to a particular adapter (such as 332 in FIG. 6) of the many adapters (332, 336, 338 in FIG. 6); also note in FIG. 6, how the internet layer 670 is between the application layer 609 and intermediate layer 608 as well as between intermediate layer 608 and workflow engine servers 601).

With respect to claim 15, White in view of 2000-OMG teaches all of the limitations of claim 14, as noted above. White further teaches ... of the transformed API calls to include the transformed responses from the target cloud services (In one embodiment of the present invention workflow definition 215 can be dynamically updated during by activities of run time phase 22, [col 6 ln 1-4]; Public API 350 can further comprise a WfPayload object to represent a "payload." The payload can associate various content items with a particular process instance, [col 21 ln 61-63]).

White does not teach wherein the computer-executable instructions cause the processor to further: update the entry of the database of the transformed API calls to include the transformed responses from the target cloud services. However, 2000-OMG teaches wherein the computer-executable instructions cause the processor to further: update the entry of the database of the transformed API calls to include the transformed responses from the target cloud services (see FIG. 2-1, [page 2-3]; the persistent state may be updated as operations are performed on the object; "This operation returns the PDS that handles persistence for use by those Protocols that require the PO to call the PDS", [page 2-8 paragraph 1]; see the interface specifically to see that entries are stored and updated by PID, PDS connect(PID p), which is the persistent identifier, [pages 2-7 and 2-8]).
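The 2000-OMG mechanism relied on for claim 15 (an entry stored under a persistent identifier, PID, and later updated in place, e.g. to attach a transformed response) can be sketched in the spirit of the specification's store(PID)/connect(PID) operations. All names in this sketch are illustrative, not the specification's IDL:

```python
# Hypothetical PID-keyed persistent store, loosely in the spirit of the OMG
# Persistent Object Service's store(PID)/connect(PID) operations.
class Datastore:
    def __init__(self):
        self.entries = {}

    def store(self, pid, state):
        """Create or overwrite the persistent state keyed by this PID."""
        self.entries[pid] = dict(state)

    def update(self, pid, **changes):
        """Update an existing entry, e.g. to attach a transformed response."""
        self.entries[pid].update(changes)
        return self.entries[pid]

ds = Datastore()
ds.store("pid-42", {"call": {"op": "start"}, "response": None})
print(ds.update("pid-42", response={"status": "ok"}))
# {'call': {'op': 'start'}, 'response': {'status': 'ok'}}
```

The PID plays the role of the claimed database-entry key: the same identifier used to store the transformed call later locates the entry for the response update.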
It would have been obvious to one skilled in the art before the effective filing date to combine White with 2000-OMG because a teaching, suggestion, or motivation in the prior art would have led one skilled in the art to combine prior art teaching to arrive at the claimed invention. White discloses a system and method that teaches all of the claimed features except for a database. White provides some implementation details of how objects are created and stored, specifically using COBRA/IDL, which is a typo (should be CORBA/IDL), (see White [col 23 ln 52]). 2000-OMG is the specification for persisting objects within the CORBA/IDL specification and language, (see OMG [page iv paragraph 1]). 2000-OMG teaches: Figure 1-1 shows the participants in the Persistent Object Service. The state of the object can be considered in two parts: the dynamic state, which is typically in memory and is not likely to exist for the whole lifetime of the object (for example, it would not be preserved in the event of a system failure), and the persistent state, which the object could use to reconstruct the dynamic state. (See 2000-OMG [page 1-2 paragraph 1]). A person having skill in the art would have a reasonable expectation of successfully persisting state of objects in the event of system failure in the system and method of White by modifying White with the persistent object data store of 2000-OMG. Therefore, it would have been obvious to combine White with 2000-OMG to a person having ordinary skill in the art, and this claim is rejected under 35 U.S.C. 103.

With respect to claim 16, White in view of OMG teaches all of the limitations of claim 14, as noted above. White and OMG do not teach wherein the computer-executable instructions cause the processor to further: queue, within a message queue, a message including the transformed responses to be returned to the API gateway.
However, Dixon teaches wherein the computer-executable instructions cause the processor to further: queue, within a message queue, a message including the transformed responses to be returned to the API gateway ("Responsive to the API bridge service receiving at least one API specific response from at least one specific API, the API bridge service translates at least one API specific response into a response message comprising a generic API response. The API bridge service places the response message in a response queue of the message queuing network, wherein the message queuing application listens to the response queue for the response message", [Abstract] lines 8-15).

It would have been obvious to one skilled in the art before the effective filing date to combine White with Dixon because a teaching, suggestion, or motivation in the prior art would have led one skilled in the art to combine prior art teaching to arrive at the claimed invention. White discloses a system and method that teaches all of the claimed feat
Read full office action

Prosecution Timeline

Dec 22, 2023 · Application Filed
Mar 26, 2026 · Non-Final Rejection, §102/§103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12421143
COOPERATIVE OPTIMAL CONTROL METHOD AND SYSTEM FOR WASTEWATER TREATMENT PROCESS
2y 5m to grant · Granted Sep 23, 2025
Patent 12406113
COMPUTER-AIDED ENGINEERING TOOLKIT FOR SIMULATED TESTING OF PRESSURE-CONTROLLING COMPONENT DESIGNS
2y 5m to grant · Granted Sep 02, 2025
Patent 12204835
STORAGE MEDIUM WHICH STORES INSTRUCTIONS FOR A SIMULATION METHOD IN A SEMICONDUCTOR DESIGN PROCESS, SEMICONDUCTOR DESIGN SYSTEM THAT PERFORMS THE SIMULATION METHOD IN THE SEMICONDUCTOR DESIGN PROCESS, AND SIMULATION METHOD IN THE SEMICONDUCTOR DESIGN PROCESS
2y 5m to grant · Granted Jan 21, 2025
Patent 12154663
METHOD OF IDENTIFYING PROPERTIES OF MOLECULES UNDER OPEN BOUNDARY CONDITIONS
2y 5m to grant · Granted Nov 26, 2024
Patent 12118279
Lattice Boltzmann Based Solver for High Speed Flows
2y 5m to grant · Granted Oct 15, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 41%
With Interview: 78% (+36.9%)
Median Time to Grant: 3y 8m
PTA Risk: Low
Based on 54 resolved cases by this examiner. Grant probability derived from career allow rate.
