DETAILED ACTION
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to the 35 U.S.C. 103 rejections (Remarks pp. 1-3) have been fully considered but are moot in view of the Examiner’s new grounds of rejection, which are based on references added to address Applicant’s amendments.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 10 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Karaje (US 12095794 B1) in view of Madan (US 20240346038 A1) and Uluderya (US 8539080 B1).
Regarding Claim 1, Karaje teaches a hybrid-cloud infrastructure environment for application programming interface (API) call routing (
Karaje discloses, “Other deployments may be monitored, analyzed, or otherwise observed by the systems described herein, all of which are within the scope of the present disclosure. For the purposes of illustration and not as a limitation, additional examples can include multi-cloud deployments, on-premises environments, hybrid cloud environments, … and many others,” Col 70, Lines 28-38, and “API gateway 802 may be configured to perform API routing, which may include routing requests to instances of API server 806 based on an API call of the requests,” Col 77, Lines 28-31.),
the hybrid-cloud infrastructure environment comprising: at least one public-cloud resource (
Karaje discloses, “The various resources included in data platform 12 may reside in the cloud and/or be located on-premises,” Col 3, Lines 55-60.);
at least one private on-premise resource (
Karaje discloses, “The various resources included in data platform 12 may reside in the cloud and/or be located on-premises,” Col 3, Lines 55-60.);
and a computing system comprising: at least one processor (
Karaje discloses, “The embodiments described herein can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor,” Col 6, Lines 44-50.);
a communication interface communicatively coupled to the at least one processor (
Karaje discloses, “As shown in FIG. 1C, computing device 50 may include a communication interface 52, a processor 54, a storage device 56, and an input/output (“I/O”) module 58 communicatively connected one to another via a communication infrastructure 60,” Col 7, Lines 26-30.);
and a memory device storing executable code that, when executed, causes the at least one processor to (
Karaje discloses, “The embodiments described herein can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor,” Col 6, Lines 44-50.):
transfer (
Karaje discloses, “When agent service 132 receives an incoming connection, it can perform a variety of checks, such as to see whether the data is being provided by a current customer, and whether the data is being provided in an appropriate format. If the data is not appropriately formatted (and/or is not provided by a current customer), it may be rejected,” Col 15, Lines 62-67, and “If the data is appropriately formatted, agent service 132 may facilitate copying the received data to a streaming data stable storage using a streaming service (e.g., Amazon Kinesis and/or any other suitable streaming service),” Col 16, Lines 1-4.), the transferring comprising:
extracting data from the at least one data provider to a temporary location; transforming the extracted data into a format suitable for loading the data to a cloud operational data store and loading the transformed data to the cloud operational data store (
Karaje discloses, “Data store 30 may be implemented by any suitable data warehouse, data lake, data mart, and/or other type of database structure as may serve a particular implementation. Such data stores may be proprietary or may be embodied as vendor provided products or services such as, for example, Snowflake, Google BigQuery, Druid, Amazon Redshift, IBM db2, Dremio, Databricks Lakehouse Platform, Cloudera, Azure Synapse Analytics, and others,” Col 4, Lines 24-31, and “In such an embodiment, components of the systems described herein may be deployed in or near Snowflake to collect data, transform data, analyze data for the purposes of detecting threats or vulnerabilities, initiate remediation workflows, generate alerts, or perform any of the other functions that can be performed by the systems described herein. In such embodiments, data may be received from a variety of sources (e.g., EDR or EDR-like tools that handle endpoint data, cloud access security broker (‘CASB’) or CASB-like tools that handle data describing interactions with cloud applications, Identity and Access Management (‘IAM’) or IAM-like tools, and many others), normalized for storage in a data warehouse, and such normalized data may be used by the systems described herein,” Col 64, Lines 20-35.
Karaje teaches a data warehouse and also teaches hybrid cloud infrastructure, but does not explicitly disclose that the data warehouse is located in the cloud. However, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Karaje to provide that the data warehouse is located in the cloud. Doing so would help improve accessibility of the data warehouse.);
perform one or more data enrichment processes to (
Karaje discloses, “Converting data 510 (which may already include one or more data streams in various data formats) into data streams 516 may allow data platform 502 to leverage benefits of stream processing and implement features such as file aggregation, data enrichment, real-time analytics, real-time alerts, copying events/messages to multiple destinations for different purposes, etc,” Col 74, Lines 25-32.);
and provide, within the hybrid-cloud infrastructure environment, data access by routing an API call to access the data from a location selected from the group consisting of (i) the at least one public-cloud resource, (ii) the at least one on-premise resource, and (iii) (
Karaje discloses, “For example, API gateway 802 may be configured to interface between cloud resources in cloud environment 512 and data platform 502. Specifically, API gateway 802 may interact with cloud resources that push data 510 to data platform 502. API gateway 802 may be configured to perform API routing, which may include routing requests to instances of API server 806 based on an API call of the requests. Additionally or alternatively, API gateway 802 may be configured to perform load balancing, which may include routing requests to instances of API server 806 based on a load of the API server instance,” Col 77, Lines 24-34.),
Karaje does not teach that the data is a replica of data.
However, Madan teaches that the data is a replica of data (
Madan discloses, “It may occur from time to time that a data consumer requests a data provider that the data provider instantiate a synced replica of one or more databases in a place that is geographically and/or network-topologically closer to the data consumer than the instantiation of that particular data that the data consumer was previously using,” ¶ 0030.).
Karaje and Madan are both considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Karaje to incorporate the teachings of Madan and provide that the data is a replica of data. Doing so would help provide more availability for the data across different geographic locations (Madan discloses, “It may occur from time to time that a data consumer requests a data provider that the data provider instantiate a synced replica of one or more databases in a place that is geographically and/or network-topologically closer to the data consumer than the instantiation of that particular data that the data consumer was previously using,” ¶ 0030.).
Karaje in view of Madan does not teach wherein the location is selected by using the at least one processor to dynamically determine a most efficient source from the group, wherein the dynamically determining includes analyzing at least one predefined factor.
However, Uluderya teaches wherein the location is selected by using the at least one processor to dynamically determine a most efficient source from the group, wherein the dynamically determining includes analyzing at least one predefined factor (
Uluderya discloses, “Request management servers 104 and 106 may send a request through router 110 to a proper server based on that server's availability, health status, request type, client type and so on. In some embodiments, the routing, throttling, and/or load balancing functionality may be integrated into the router 110 instead of request management servers 104 and 106,” Col 3, Lines 65-67 and Col 4, Lines 1-4, and
“In some embodiments, weighted routing may be implemented. The health aspect may come in the way that the weights are set. A policy engine mechanism may analyze server health data and update the weights correspondingly and dynamically. Thus a system may employ rules or a script to make optimized routing decisions; throttle or prioritize to prevent harmful requests from entering the service and prioritize different request types; and maintain a record of why routing decisions are made as well as the outcome of the decision (success/response time, failure/reason) for optimization and allow automatic and manual customization,” Col 6, Lines 8-18.
Here, a location (proper server) is selected based on efficiency, taking into account factors such as availability, health status, and request type. This can be done dynamically using a policy engine mechanism in order to optimize routing in response to changing conditions of servers.
This aligns with paragraph 83 of the present application’s specification, which states “According to various embodiments, the one or more predefined factors are selected from the group consisting of data type, data state, and data availability.”).
Karaje in view of Madan, and Uluderya are both considered to be analogous to the claimed invention because they are in the same field of server-based computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Karaje in view of Madan to incorporate the teachings of Uluderya and provide wherein the location is selected by using the at least one processor to dynamically determine a most efficient source from the group, wherein the dynamically determining includes analyzing at least one predefined factor. Doing so would help improve throughput for the data transfers (Uluderya discloses, “Request management servers 104 and 106 may send a request through router 110 to a proper server based on that server's availability, health status, request type, client type and so on. In some embodiments, the routing, throttling, and/or load balancing functionality may be integrated into the router 110 instead of request management servers 104 and 106,” Col 3, Lines 65-67 and Col 4, Lines 1-4.).
Claims 10 and 18 are a computing system claim and computer-implemented method claim, respectively, corresponding to the hybrid cloud infrastructure environment Claim 1. Therefore, Claims 10 and 18 are rejected for the same reason set forth in the rejection of Claim 1. In addition, Claim 10 recites “A computing system for data process management, the computing system comprising: at least one processor; a communication interface communicatively coupled to the at least one processor; and a memory device storing executable code” (Karaje discloses, “The embodiments described herein can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor,” Col 6, Lines 44-50, and “As shown in FIG. 1C, computing device 50 may include a communication interface 52, a processor 54, a storage device 56, and an input/output (“I/O”) module 58 communicatively connected one to another via a communication infrastructure 60,” Col 7, Lines 26-30.); and Claim 18 recites, “A computer-implemented method for data process management” (Abstract of Karaje.).
Claims 2, 11, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Karaje (US 12095794 B1) in view of Madan (US 20240346038 A1), Uluderya (US 8539080 B1), and Mottley (US 20230138900 A1).
Regarding Claim 2, Karaje in view of Madan and Uluderya teaches the hybrid cloud infrastructure environment of claim 1. Karaje in view of Madan and Uluderya does not teach wherein the hybrid-cloud infrastructure environment comprises a multi-region environment comprising read replica databases at a plurality of geographic locations, the read replica databases facilitating deployment of respective read-only database instances to the plurality of geographic locations.
However, Mottley teaches wherein the hybrid-cloud infrastructure environment comprises a multi-region environment comprising read replica databases at a plurality of geographic locations, the read replica databases facilitating deployment of respective read-only database instances to the plurality of geographic locations (
Mottley discloses, “Multi-server cloud environments may be used to split traffic between distinct geographic regions and to provide a level of redundancy and increased throughput for distributed applications in case one copy goes offline due to a hardware failure or otherwise… A read-replica cloud environment can be understood as a distributed application that operates fully on a first distributed cloud server, allowing data to both be read and written from the first distributed cloud server, while additionally deploying a “read-only” copy of the respective distributed application on a second distributed cloud server that is linked to the first distributed cloud server… The cloud environment management system 120 may then query the distributed application status repository 130 (and/or database 260) to look up the cloud environment status (e.g., standard, read replica, and/or multi-server) of each respective distributed cloud application,” ¶ 0042.).
Karaje in view of Madan and Uluderya, and Mottley are both considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Karaje in view of Madan and Uluderya to incorporate the teachings of Mottley and provide wherein the hybrid-cloud infrastructure environment comprises a multi-region environment comprising read replica databases at a plurality of geographic locations, the read replica databases facilitating deployment of respective read-only database instances to the plurality of geographic locations. Doing so would help allow applications to be distributed across the multi-region environment and/or enhance data availability and security. (Mottley discloses, “The advent of cloud computing allows for businesses to execute distributed applications on virtual instances over a network (e.g., the Internet). Cloud computing may be utilized to run applications remotely using computer systems, such as a server or collection of servers to provision virtual machines used to execute an application (such as a distributed application) over a network,” ¶ 0002.).
Claims 11 and 19 are a computing system claim and computer-implemented method claim, respectively, corresponding to the hybrid cloud infrastructure environment Claim 2. Therefore, Claims 11 and 19 are rejected for the same reason set forth in the rejection of Claim 2.
Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Karaje (US 12095794 B1) in view of Madan (US 20240346038 A1), Uluderya (US 8539080 B1), and Chandrashekar (US 20160188427 A1).
Regarding Claim 3, Karaje in view of Madan and Uluderya teaches the hybrid cloud infrastructure environment of claim 1. Karaje in view of Madan and Uluderya does not teach wherein the hybrid-cloud infrastructure environment comprises a primary database instance comprising a read/write instance.
However, Chandrashekar teaches wherein the hybrid-cloud infrastructure environment comprises a primary database instance comprising a read/write instance (
Chandrashekar discloses, “According to an implementation, each of the application nodes 204 connect to a single primary database 310.1, regardless of whether the database 310.1 is located in the same datacenter 110.1 as the application nodes 204.1 or not. For example, a primary database 310.1 may be read/write and a secondary database 310.2 may be configured to be read-only such that it mirrors changes from the primary database,” ¶ 0056.).
Karaje in view of Madan and Uluderya, and Chandrashekar are both considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Karaje in view of Madan and Uluderya to incorporate the teachings of Chandrashekar and provide wherein the hybrid-cloud infrastructure environment comprises a primary database instance comprising a read/write instance. Doing so would help allow the primary database to perform both read and write operations, and/or increase data security and access efficiency. (Chandrashekar discloses, “For example, a primary database 310.1 may be read/write and a secondary database 310.2 may be configured to be read-only such that it mirrors changes from the primary database,” ¶ 0056.).
Claim 12 is a computing system claim corresponding to the hybrid cloud infrastructure environment Claim 3. Therefore, Claim 12 is rejected for the same reason set forth in the rejection of Claim 3.
Claims 4-5 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Karaje (US 12095794 B1) in view of Madan (US 20240346038 A1), Uluderya (US 8539080 B1), and Tamjidi (US 20190332451 A1).
Regarding Claim 4, Karaje in view of Madan and Uluderya teaches the hybrid cloud infrastructure environment of claim 1. Karaje in view of Madan and Uluderya does not teach wherein the replica of the data from the at least one data provider is received via a data receiving process at a batch file transfer broker to facilitate the extracting to the temporary location.
However, Tamjidi teaches wherein the replica of the data from the at least one data provider is received via a data receiving process at a batch file transfer broker to facilitate the extracting to the temporary location (
Tamjidi discloses, “The embodiment of the process 130 illustrated in FIG. 6 begins with the batch request engine 116 receiving (block 132) the batch REST request 104 from the client device 14D. The batch REST request 104 can include any suitable number of requested items 106 (e.g., requested items 106A, 106B, and 106C). For example, in certain embodiments, a maximum number of requested items allowed in the batch REST request 104 may be defined by a particular system property (e.g., “glide.rest.batch.maxRequestItems”), which may have an adjustable default value (e.g., 1000)… When the batch REST request 104 is suitably validated to proceed, the batch request engine 116 extracts (block 134) the requested items 106 from the batch REST request 104,” ¶ 0044.
The claimed “batch file transfer broker” is mapped to the disclosed batch request engine that performs extract, transform, and load (ETL) functions. This is a batch file transfer broker because it performs batch processing on large volumes of data during extraction and transferring of the data.
After the combination of Karaje in view of Madan and Uluderya with Tamjidi, the replica of the data from the at least one data provider from Karaje in view of Madan and Uluderya is received using batch processing from the batch request engine from Tamjidi in order to facilitate extraction.).
Karaje in view of Madan and Uluderya, and Tamjidi are both considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Karaje in view of Madan and Uluderya to incorporate the teachings of Tamjidi and provide wherein the replica of the data from the at least one data provider is received via a data receiving process at a batch file transfer broker to facilitate the extracting to the temporary location. Doing so would help improve the efficiency of transferring data using batch transfers. (Tamjidi discloses, “As such, a batch REST API is presently disclosed that enables multiple REST requests to be combined into a batch request that can more efficiently be communicated to a REST server for processing,” ¶ 0022.).
Claim 14 is a computing system claim corresponding to the hybrid cloud infrastructure environment Claim 4. Therefore, Claim 14 is rejected for the same reason set forth in the rejection of Claim 4.
Regarding Claim 5, Karaje in view of Madan, Uluderya and Tamjidi teaches the hybrid cloud infrastructure environment of claim 4, wherein the batch file transfer broker is configured to receive batch file extracts from the at least one data provider to facilitate the extracting of the batch file extracts (
Tamjidi discloses, “The embodiment of the process 130 illustrated in FIG. 6 begins with the batch request engine 116 receiving (block 132) the batch REST request 104 from the client device 14D… When the batch REST request 104 is suitably validated to proceed, the batch request engine 116 extracts (block 134) the requested items 106 from the batch REST request 104,” ¶ 0044.
After the combination of Karaje in view of Madan and Uluderya with Tamjidi, the at least one data provider from Karaje in view of Madan and Uluderya sends batch file extracts to the batch request engine from Tamjidi in order to facilitate extraction.).
Karaje in view of Madan and Uluderya, and Tamjidi are both considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Karaje in view of Madan and Uluderya to incorporate the teachings of Tamjidi and provide wherein the batch file transfer broker is configured to receive batch file extracts from the at least one data provider to facilitate the extracting of the batch file extracts. Doing so would help improve the efficiency of transferring data using batch transfers. (Tamjidi discloses, “As such, a batch REST API is presently disclosed that enables multiple REST requests to be combined into a batch request that can more efficiently be communicated to a REST server for processing,” ¶ 0022.).
Claim 15 is a computing system claim corresponding to the hybrid cloud infrastructure environment Claim 5. Therefore, Claim 15 is rejected for the same reason set forth in the rejection of Claim 5.
Claims 6-7 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Karaje (US 12095794 B1) in view of Madan (US 20240346038 A1), Uluderya (US 8539080 B1), and Johnston (US 11080294 B1).
Regarding Claim 6, Karaje in view of Madan and Uluderya teaches the hybrid cloud infrastructure environment of claim 1. Karaje in view of Madan and Uluderya does not teach wherein the replica of the data from the at least one data provider is received via a streaming event queue to facilitate the extracting to the temporary location.
However, Johnston teaches wherein the replica of the data from the at least one data provider is received via a streaming event queue to facilitate the extracting to the temporary location (
Johnston discloses, “The architecture may be responsible for the secure receipt of event data (which may utilize TLS encryption over HTTP), insertion into a streaming data event queue 110, validation, parsing and processing of the data by containerized serverless functions 100, and persistence of the processed events into a distributed, NoSQL data store cluster 180,” Col 4, Lines 31-37.
After the combination of Karaje in view of Madan and Uluderya with Johnston, the replica of data from the at least one data provider from Karaje in view of Madan and Uluderya is received using Johnston’s streaming data event queue to facilitate extraction.).
Karaje in view of Madan and Uluderya, and Johnston are both considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Karaje in view of Madan and Uluderya to incorporate the teachings of Johnston and provide wherein the replica of the data from the at least one data provider is received via a streaming event queue to facilitate the extracting to the temporary location. Doing so would help improve efficiency and scalability by using the streaming event queue. (Johnston discloses, “As events are inserted into the streaming data queue 110, the parser functions 100 may be automatically triggered. The functions may extract batches of events 120 from the queue 110. The size of the batches 120 may be pre-defined on a per-queue basis. Each function may be massively scalable and multiple instances of a single function may be invoked in parallel to efficiently process events as they are inserted into the streaming event queue,” Col 4, Lines 60-67.).
Claim 16 is a computing system claim corresponding to the hybrid cloud infrastructure environment Claim 6. Therefore, Claim 16 is rejected for the same reason set forth in the rejection of Claim 6.
Regarding Claim 7, Karaje in view of Madan, Uluderya and Johnston teaches the hybrid cloud infrastructure environment of claim 6, wherein the streaming event queue is configured to receive encrypted files from the data providers and securely store the encrypted files to facilitate extracting the encrypted files to the temporary location (
Johnston discloses, “Events transmitted to the queues may be encrypted using Transport Layer Security (TLS),” Col 4, Lines 49-51.
After the combination of Karaje in view of Madan and Uluderya with Johnston, the files from the data providers from Karaje in view of Madan and Uluderya are encrypted via Transport Layer Security before being sent to the queue from Johnston.).
Karaje in view of Madan and Uluderya, and Johnston are both considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Karaje in view of Madan and Uluderya to incorporate the teachings of Johnston and provide wherein the streaming event queue is configured to receive encrypted files from the data providers and securely store the encrypted files to facilitate extracting the encrypted files to the temporary location. Doing so would help enhance data security of the overall system.
Claim 17 is a computing system claim corresponding to the hybrid cloud infrastructure environment Claim 7. Therefore, Claim 17 is rejected for the same reason set forth in the rejection of Claim 7.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Karaje (US 12095794 B1) in view of Madan (US 20240346038 A1), Uluderya (US 8539080 B1), and Sundaram (US 20200372576 A1).
Regarding Claim 8, Karaje in view of Madan and Uluderya teaches the hybrid cloud infrastructure environment of claim 1. Karaje in view of Madan and Uluderya does not teach wherein the temporary location comprises a data ingestion and transformation engine, wherein the data ingestion and transformation engine performs the transforming of the extracted data using a plurality of rules stored to a data provider transform rules database, wherein the transforming comprises data decryption, data transformation, and data re-encryption of the extracted data.
However, Sundaram teaches wherein the temporary location comprises a data ingestion and transformation engine (
[media_image1.png: FIG. 1 of Sundaram, 686 × 471, greyscale]
Sundaram discloses, “For example, such eligibility decisions may be based on Boolean logic or machine-learning logic as mentioned, which the lender is able to input into the Lender Portal 109 in the form of rule sets, in addition to the type of criteria being assessed… In an embodiment, the lender 120, as shown in FIG. 1, is able to input information into the lender portal 109 in the form of rule sets and executable instructions which may be lender-specific and non-standardized. Then, the self-contained lender confidential data service 108e in the vault 108 may run on these rule sets and executable instructions, to convert them autonomously to standardized instructions which can be parsed by the microservices 108a-108c of the lender-specific broker 114. These instructions in turn may be encrypted with a lender-specific key, where there is further a different lender-specific key for each lender specific microprocess, and stored in the Lender Confidential data repository 108f. The repository 108f in turn may be a data structure such as a database,” ¶ 0026.
The claimed “data ingestion and transformation engine” is mapped to the disclosed vault 108 that contains the “lender confidential data service 108e” which takes in rule sets and executable instructions as input (ingestion), and outputs standardized instructions which are then parsed by “microservices 108a-108c”, which are also stored in vault 108. These standardized instructions are then used by “lender specific broker 114” and “encryption service 115”, which are both also stored in vault 108, to decrypt, re-transform (transformation), and then re-encrypt data. This is further illustrated in the above FIG. 1.),
wherein the data ingestion and transformation engine performs the transforming of the extracted data using a plurality of rules stored to a data provider transform rules database (
Sundaram discloses, “These instructions in turn may be encrypted with a lender-specific key, where there is further a different lender-specific key for each lender specific microprocess, and stored in the Lender Confidential data repository 108f. The repository 108f in turn may be a data structure such as a database,” ¶ 0026, and “At step 713, the lender specific broker 114, through the prequalification microservice 108a, using the lender-specific key 601a, retrieves lender encrypted rules and/or executable logic not to perform the prequalification eligibility analysis, as in the previous embodiment, but to transform the parameters of the prequalification request so that it may be inputted to the third party API 111 to be executed by the third party LOS 111a for performing prequalification eligibility analysis,” ¶ 0056.
The claimed “data provider transform rules database” is mapped to the disclosed repository 108f that contains the standardized instructions that are generated from both the rules and executable instructions in the previous step.
The disclosed “lender encrypted rules and/or executable logic”, stored in the “repository 108f”, are used to transform data for inputting into a third party API 111 to receive a response from third party LOS 111a, as seen in the next step.),
wherein the transforming comprises data decryption, data transformation, and data re-encryption of the extracted data (
Sundaram discloses, “This response may be encrypted with a lender-specific key (e.g. 601a) that the lender specific broker 114 has access to, in order to decrypt upon receipt by the lender specific broker 114. At step 715, using the lender-specific key 601a the lender specific broker 114 then decrypts the lender LOS 111a response and re-transforms the parameters to match the universal format of the Multi-Lender architecture (e.g. where it matches parameter-wise the output in step 711 of the other embodiment). Finally at step 715, the output response after having its parameters re-transformed to match the universal format, is re-encrypted, using the encryption service 115 in FIG. 1, into one of the lender-agnostic universal formats described above,” ¶ 0056.).
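For illustration only, the decrypt, re-transform, re-encrypt flow quoted above can be sketched in Python. The toy XOR cipher and all identifiers (xor_crypt, broker_roundtrip, to_universal) are hypothetical stand-ins for illustration and are not Sundaram's actual implementation:

```python
import json

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher (XOR) standing in for real lender-specific encryption;
    # applying it twice with the same key recovers the original bytes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def broker_roundtrip(encrypted_response: bytes, lender_key: bytes, to_universal):
    # Step 1: decrypt the lender LOS response with the lender-specific key.
    lender_response = json.loads(xor_crypt(encrypted_response, lender_key))
    # Step 2: re-transform parameters into the universal (lender-agnostic) format.
    universal = to_universal(lender_response)
    # Step 3: re-encrypt the re-transformed response before it leaves the broker.
    return xor_crypt(json.dumps(universal).encode(), lender_key)

key = b"lender-a-key"
payload = xor_crypt(json.dumps({"apr": 5.2, "ok": True}).encode(), key)
out = broker_roundtrip(payload, key,
                       lambda r: {"rate": r["apr"], "eligible": r["ok"]})
print(json.loads(xor_crypt(out, key)))  # {'rate': 5.2, 'eligible': True}
```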
Karaje in view of Madan and Uluderya, and Sundaram are both considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Karaje in view of Madan and Uluderya to incorporate the teachings of Sundaram and provide wherein the temporary location comprises a data ingestion and transformation engine, wherein the data ingestion and transformation engine performs the transforming of the extracted data using a plurality of rules stored to a data provider transform rules database, wherein the transforming comprises data decryption, data transformation, and data re-encryption of the extracted data. Doing so would help improve the security of the overall system (Sundaram discloses, “As a result, because such data may not be stored in the vault, and the outputs of such data are only visible to the user, the entire application is processed in an end-to-end secure manner,” ¶ 0074.).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Karaje (US 12095794 B1) in view of Madan (US 20240346038 A1), Uluderya (US 8539080 B1), and Sethu (US 20230072123 A1).
Regarding Claim 9, Karaje in view of Madan and Uluderya teaches the hybrid-cloud infrastructure environment of claim 1. Karaje in view of Madan and Uluderya does not teach wherein the executable code further causes the at least one processor to create a structured log of all the transformed data.
However, Sethu teaches wherein the executable code further causes the at least one processor to create a structured log of all the transformed data (
Sethu discloses, “The junk data from the log data file may be cleaned to generate a clean log data file. In an embodiment, the clean log data file may be generated by clustering each of a set of similar data files from the set of data files in one the plurality of data formats. Further, the data pre-processing module 204 may be configured to structure the clean log data file to generate a structured log data file,” ¶ 0041.
Karaje in view of Madan and Uluderya already teaches the transformed data, and in the combination of Karaje in view of Madan and Uluderya with Sethu, the structured log data file of Sethu is generated from the transformed data of Karaje in view of Madan and Uluderya.).
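For illustration only, a structured log of transformed data, as combined above, might resemble the following minimal Python sketch; the JSON-lines format and all field names are assumptions for illustration, not Sethu's disclosed file format:

```python
import json

def structured_log(transformed_records):
    # One machine-parseable JSON entry per transformed record, so errors can
    # be filtered by field instead of grepped from free-form log text.
    lines = []
    for rec in transformed_records:
        lines.append(json.dumps({
            "record_id": rec["id"],
            "stage": "transform",
            "status": rec.get("status", "ok"),
        }))
    return "\n".join(lines)
```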
Karaje in view of Madan and Uluderya, and Sethu are both considered to be analogous to the claimed invention because they are in the same field of server-based computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Karaje in view of Madan and Uluderya to incorporate the teachings of Sethu and provide wherein the executable code further causes the at least one processor to create a structured log of all the transformed data. Doing so would help provide verbose information to the user for debugging in case of an error (Sethu discloses, “Currently, developers use grep tools to search for log data with errors present in the log data files, filter log data for specific date and time across multiple log data files and then manually analyze the log data files to identify the errors. Once, the errors are identified and the root cause behind the errors are determined, then the determined root cause are further analyzed in order to fix the errors,” ¶ 0003.).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Karaje (US 12095794 B1) in view of Madan (US 20240346038 A1), Uluderya (US 8539080 B1), Mottley (US 20230138900 A1), and Sathyanarayana (US 20150301910 A1).
Regarding Claim 13, Karaje in view of Madan and Uluderya teaches the computing system of claim 10. Karaje in view of Madan and Uluderya does not teach wherein a nearest read replica database is determined based on geographic location of the read replica databases at the plurality of locations.
However, Mottley teaches the read replica databases at the plurality of locations (
Mottley discloses, “Multi-server cloud environments may be used to split traffic between distinct geographic regions and to provide a level of redundancy and increased throughput for distributed applications in case one copy goes offline due to a hardware failure or otherwise… A read-replica cloud environment can be understood as a distributed application that operates fully on a first distributed cloud server, allowing data to both be read and written from the first distributed cloud server, while additionally deploying a “read-only” copy of the respective distributed application on a second distributed cloud server that is linked to the first distributed cloud server… The cloud environment management system 120 may then query the distributed application status repository 130 (and/or database 260) to look up the cloud environment status (e.g., standard, read replica, and/or multi-server) of each respective distributed cloud application,” ¶ 0042.).
Karaje in view of Madan and Uluderya, and Mottley are both considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Karaje in view of Madan and Uluderya to incorporate the teachings of Mottley and provide the read replica databases at the plurality of locations. Doing so would help allow distributing applications across the multi-region environment and enhance data availability and security (Mottley discloses, “The advent of cloud computing allows for businesses to execute distributed applications on virtual instances over a network (e.g., the Internet). Cloud computing may be utilized to run applications remotely using computer systems, such as a server or collection of servers to provision virtual machines used to execute an application (such as a distributed application) over a network,” ¶ 0002.).
Karaje in view of Madan, Uluderya, and Mottley does not teach wherein a nearest read replica database is determined based on geographic location of the read replica databases.
However, Sathyanarayana teaches wherein a nearest read replica database is determined based on geographic location of the read replica databases (
Sathyanarayana discloses, “In an embodiment, a standby database server which is geographically closest to primary database server may be selected from the standby database servers…,” ¶ 0024.).
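For illustration only, selecting the geographically closest replica, as Sathyanarayana describes for standby database servers, can be sketched as a great-circle distance comparison; the replica names and coordinates below are hypothetical:

```python
import math

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) points, in kilometers.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_replica(client_loc, replicas):
    # replicas: {name: (lat, lon)}; return the geographically closest one.
    return min(replicas, key=lambda name: haversine_km(client_loc, replicas[name]))

replicas = {"us-east": (39.0, -77.5), "eu-west": (53.3, -6.3),
            "ap-south": (19.1, 72.9)}
print(nearest_replica((50.1, 8.7), replicas))  # client near Frankfurt -> eu-west
```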
Karaje in view of Madan, Uluderya, and Mottley, and Sathyanarayana are both considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Karaje in view of Madan, Uluderya, and Mottley to incorporate the teachings of Sathyanarayana and provide wherein a nearest read replica database is determined based on geographic location of the read replica databases at the plurality of locations. Doing so would help provide efficient and fast access to each of the databases in the different regions.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Karaje (US 12095794 B1) in view of Madan (US 20240346038 A1), Uluderya (US 8539080 B1), Mottley (US 20230138900 A1), and Chandrashekar (US 20160188427 A1).
Regarding Claim 20, Karaje in view of Madan, Uluderya, and Mottley teaches the computer-implemented method of claim 19. Karaje in view of Madan, Uluderya, and Mottley does not teach wherein the hybrid-cloud infrastructure environment comprises a primary database instance comprising a read/write instance.
However, Chandrashekar teaches wherein the hybrid-cloud infrastructure environment comprises a primary database instance comprising a read/write instance (
Chandrashekar discloses, “According to an implementation, each of the application nodes 204 connect to a single primary database 310.1, regardless of whether the database 310.1 is located in the same datacenter 110.1 as the application nodes 204.1 or not. For example, a primary database 310.1 may be read/write and a secondary database 310.2 may be configured to be read-only such that it mirrors changes from the primary database,” ¶ 0056.).
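For illustration only, the primary read/write versus read-only secondary arrangement quoted from Chandrashekar can be sketched as follows; the mirror-on-write scheme and class names are illustrative assumptions, not Chandrashekar's actual implementation:

```python
class PrimaryDB:
    # Primary instance: accepts both reads and writes, and mirrors each
    # write to its attached read-only replicas.
    def __init__(self):
        self.data = {}
        self.replicas = []

    def write(self, key, value):
        self.data[key] = value
        for replica in self.replicas:
            replica.data[key] = value  # mirror change to read-only copies

    def read(self, key):
        return self.data.get(key)


class ReadReplica:
    # Secondary instance: serves reads only; writes must go to the primary.
    def __init__(self, primary):
        self.data = dict(primary.data)
        primary.replicas.append(self)

    def read(self, key):
        return self.data.get(key)

    def write(self, key, value):
        raise PermissionError("replica is read-only; route writes to the primary")
```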
Karaje in view of Madan, Uluderya, and Mottley, and Chandrashekar are both considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Karaje in view of Madan, Uluderya, and Mottley to incorporate the teachings of Chandrashekar and provide wherein the hybrid-cloud infrastructure environment comprises a primary database instance comprising a read/write instance. Doing so would help allow the primary database to perform both read and write operations (Chandrashekar discloses, “For example, a primary database 310.1 may be read/write and a secondary database 310.2 may be configured to be read-only such that it mirrors changes from the primary database,” ¶ 0056.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Poort et al. (US 20230185608 A1): Compute Recommendation Engine
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW SUN whose telephone number is (571)272-6735. The examiner can normally be reached Monday-Friday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW NMN SUN/Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195