Prosecution Insights
Last updated: April 19, 2026
Application No. 18/449,973

CONFIDENTIAL COMPUTING TECHNIQUES FOR DATA CLEAN ROOMS

Status: Non-Final OA (§103)
Filed: Aug 15, 2023
Examiner: GUTMAN, JENNIFER MARIE
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: Habu Inc.
OA Round: 1 (Non-Final)

Grant Probability: 59% (Moderate)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Grants 59% of resolved cases.

Career Allow Rate: 59% (17 granted / 29 resolved; +3.6% vs TC avg)
Interview Lift: +50.5% (strong; resolved cases with vs. without interview)
Typical Timeline: 2y 11m avg prosecution
Career History: 52 total applications across all art units (29 resolved, 23 currently pending)

Statute-Specific Performance

§101: 20.3% (-19.7% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 9.3% (-30.7% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)

Deltas are vs. Tech Center average estimates. Based on career data from 29 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner Notes

Examiner cites particular columns and line numbers in the references as applied to the claims below for convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references cited in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Specification

The disclosure is objected to because of the following informalities: “[Paragraphs of the specification] should be individually and consecutively numbered using Arabic numerals, so as to unambiguously identify each paragraph. The number should consist of at least four numerals enclosed in square brackets including leading zeros.” (see 37 C.F.R. 1.52(b)(6) and MPEP 608.01). The paragraphs of the Specification are not consecutively numbered. Additionally, the Specification contains multiple paragraphs numbered [0001]-[0009]. Specifically, on pages 7-10 of the Specification, there is a set of paragraphs numbered [0001]-[0009] between the paragraphs numbered [0031] and [0032]. Appropriate correction is required.

The use of the terms Amazon Web Services, Microsoft Azure, Google Cloud Platform, Snowflake, MySQL, PostgreSQL, Amazon S3, Apache Spark, Docker, Kubernetes, and JavaScript, which are trade names or marks used in commerce, has been noted in this application.
The term should be accompanied by the generic terminology; furthermore, the term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce such as ™, SM, or ® following the term. Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3, 5-6, 11, 13, 16-17, and 20 are rejected under 35 U.S.C.
103 as being unpatentable over ORTIZ et al. (U.S. Pub. No. 2019/0362083), hereinafter ORTIZ, in view of KO et al. (U.S. Pub. No. 2023/0131060), hereinafter KO, and NPL Document: “Introduction to JSON Web Tokens” published by Auth0, hereinafter referred to as Auth0. Regarding claim 1, ORTIZ teaches A method, comprising: receiving, by a data clean room orchestration system (e.g., resource manager 1100 and/or application manager 1124), an indication of mutually attested code for a data clean room between two or more partners ([0067]-[0068] – “the platform may interface with participating partners (e.g., banks and merchants) to receive, from a respective system of each partner, consumer data including transaction data […] The received consumer data from the partners cannot be accessed, decrypted or read by any other user, system or process except by the Clean Room for the stipulated purpose, i.e., for the purpose of running the analytics and generating the offers. This platform enables the execution of analytics on encrypted data”; [0087] – “A secure enclave 133 may be configured to store an encrypted dataset in a single secure enclave and execute one or more analytics algorithms”; [0149]-[0150] – “resource manager 1100 can verify that the client system is an authorized party to Clean Room 300. Once the client system has been verified as an authorized party, resource manager 1100 may transmit, and display at the client system, one or more data analytics to which the client system has access. The client system may elect one or more options from the displayed data analytics options. […] The client system may then send the complete data query to resource manager 1100. 
Resource manager 1100 may receive the data query from the client system, and proceed to send the query to application manager 1124 in order to launch the data analytics based on the data query from the client system.”; [0151] – “one or more data analytic operations may be open for inspection and/or signed by all authorized parties participating in Clean Room 300 to assure the authorized parties that the Clean Room is secure and intact.”; [0155] – “Similar to a data query 118 from the system illustrated in FIG. 14, a query request 118 from a client system 119 may be sent to the Resource Manager 1100. The query request 118 may include a query for data analytics.” The received query indicates data analytics to be performed in Clean Room 300, where the data analytics operations are inspected and signed (i.e., attested) by all parties of the Clean Room. For clarity of the record, the Examiner would like to point to paragraph [0032] of the Specification, which recites “mutually attested code 160 (such as a program that runs various pre-approved queries on partner datasets)”. The data analytic operations which have been inspected and signed by all parties participating the Clean Room and are run on the consumer data from the participating partners, is thus analogous to the “mutually attested code”.); configuring, by the data clean room orchestration system, a trusted execution environment for the data clean room between the two or more partners ([0067]-[0069] – “A secure platform for processing private consumer data, such as transaction data, is described herein. In some embodiments, the platform may interface with participating partners (e.g., banks and merchants) to receive, from a respective system of each partner, consumer data including transaction data (also referred to as "TXN data"). 
[…] The platform may store the received consumer data in a secure area (also referred to as the "Clean Room"), where the consumer data is then decrypted and analyzed to generate personalized offers for each consumer. The received consumer data from the partners cannot be accessed, decrypted or read by any other user, system or process except by the Clean Room for the stipulated purpose, i.e., for the purpose of running the analytics and generating the offers. […] the Clean Room is implemented within one or more secure enclaves within a Trusted Execution Environment (TEE) of a processor (e.g., a CPU)”; [0150] – “Resource manager 1100 may receive the data query from the client system, and proceed to send the query to application manager 1124 in order to launch the data analytics based on the data query from the client system. Application manager 1124 may be an application configured to generate one or more enclaves 133a, 133b, 133n in order to run analytics on the encrypted data using the enclaves. In some embodiments, one or more worker nodes may be used to perform the required data analytics.”; [0209] – “Once a job request is received by resource manager 1100, for example from client system 119 (e.g. a partner system 115 or a different component in platform 100), the resource manager 1100 may send a resource request to a node manager 1121 within a container 1120, which may generate an application master 1123. Next, application master 1123 may return the container request back to resource manager 1100 and spawns worker nodes 132a, 132b within separate containers 1130, 1150 to perform tasks.”; [0196]-[0198] – “The application master requests a resource manager (see e.g. FIG. 
11A) to spawn worker containers to perform a chunk of the analytics.”), the trusted execution environment comprising one or more virtual machines (VMs) that are individually or collectively operable to execute the mutually attested code ([0150]-[0151] – “one or more worker nodes may be used to perform the required data analytics. In some embodiments, one or more data analytic operations may be open for inspection and/or signed by all authorized parties participating in Clean Room 300 to assure the authorized parties that the Clean Room is secure and intact.”; [0196] – “worker applications (which may also be referred to as worker nodes or workers)”; [0199] – “A worker application performs data analytics. There may be multiple instances of worker applications. Each worker may be within a secure enclave, or contains a secure enclave, and the enclave may receive the input file and the interval it should process. The analytics are carried out inside secure enclaves”; [0218] – “A YARN container, which may act as a worker application or worker daemon, may coordinate resource allocation on one machine. Each YARN container may include an executor 1127a, 1127b, which can execute Spark tasks or applications. Generally speaking, an executor may be an implemented process launched for an application on a worker node or a YARN container, that runs tasks and keeps data in memory or disk storage across them. Each application may have its own executors.”; [0229]-[0230] – “as Clean Room 300 receives an amount of encrypted data, it may distribute the data to an application master 1127 for data analytics. Driver application 1125 may receive the encrypted data and transmit the data to one or more executors 1127a, 1127b to perform one or more data analytic tasks.
[…] The executor 1127 may have three components, including: untrusted environment, such as untrusted Java virtual machine (JVM) 1128, trusted environment such as trusted JVM 1129, and a shared memory 2003.”); obtaining, by the one or more VMs in the trusted execution environment configured by the data clean room orchestration system, two or more partner datasets encrypted with respective secret keys of the two or more partners ([0229]-[0230] – “as Clean Room 300 receives an amount of encrypted data, it may distribute the data to an application master 1127 for data analytics. Driver application 1125 may receive the encrypted data and transmit the data to one or more executors 1127a, 1127b to perform one or more data analytic tasks. […] The executor 1127 may have three components, including: untrusted environment, such as untrusted Java virtual machine (JVM) 1128, trusted environment such as trusted JVM 1129, and a shared memory 2003.”; [0238] – “The encrypted data may be sent from driver 1125 in encrypted form, and get sent to trusted JVM 1129 in encrypted form through shared memory communication 2003. Once encrypted data arrives at trusted JVM 1129, it may be decrypted and stored in a data storage 2008 for decrypted data, and subsequently may be processed by isolated task runner 2007 within trusted JVM 1129.”; [0067]-[0068] – “the platform may interface with participating partners (e.g., banks and merchants) to receive, from a respective system of each partner, consumer data including transaction data (also referred to as "TXN data"). The consumer data may be encrypted with an encryption key.
The platform may store the received consumer data in a secure area (also referred to as the "Clean Room")”; [0139] – “encrypted data may be transmitted from partner system 115 to platform 100 using the secure communication channel.”; [0144] – “Once partner portal 116 receives the information representative of data amount, destination enclave(s) and public key(s) from Data Manager 134, partner portal 116 may proceed to encrypting the raw data. For example, partner portal 116 may randomly generate a 256 bit Data Encryption Key (DEK) for each destination enclave and encrypts some raw data with the respective DEKs […] Next, partner portal 116 may send the encrypted data along with the encrypted key (e.g. encrypted DEK) to Data Manager 134 via communication channel 215.”; Claim 11 – “receiving one or more data sets from one or more corresponding partner computing devices, each of the one or more data sets digitally signed by a private key corresponding to each of the partner computing devices”); transmitting, to endpoints associated with the two or more partners ([0072] – partner systems 115 communicate with platform 100 over a network, thus there necessarily must be endpoints, i.e., devices that connect to and exchange information with the network, in the partner systems), an attestation report comprising at least […] a host public key of a host machine associated with the one or more VMs ([0120] – “A Remote Attestation mechanism may be used to authenticate and establish a secure communication channel, whereby a remote client (e.g. a partner system 115) may ensure they are communicating with a particular piece of code running in enclave mode on an authentic non-compromised processor of platform 100. Remote Attestation can also be used by the enclave to send a public key to the client in a non-malleable way.”; [0124] – “Enclave to present a Remote Attestation Transcript. […] A thus transformed protocol can be carried out by the untrusted enclave host itself. 
The enclave authenticates by presenting this protocol transcript similar to a public key certificate and signing a challenge and its new Diffie Hellman message by the public key embedded in the Remote Attestation transcript.”; [0130] – “a remote attestation process between a partner system 115 and a trust manager utility 127 of Security and Encryption unit 125. At step 410, a Certificate Manager utility 128 can issue a Public Key Certificate 129 for each partner.”; [0141] – “Clean Room 300 may include a Data Manager 134 configured to send public key of one or more enclaves to a partner portal 116 for encryption of data at the partner portal.”; [0245] – “secure enclaves (e.g., isolated data processors, either hardware or software, or combinations thereof)”; [0199], [0218], [0229]-[0230] – workers, comprising the one or more VMs, are within secure enclaves); receiving, by the one or more VMs, the respective secret keys wrapped with the host public key of the host machine ([0144] – “partner portal 116 may randomly generate a 256 bit Data Encryption Key (DEK) for each destination enclave and encrypts some raw data with the respective DEKs […] A different DEK may be generated for each destination enclave, and thus for each public key associated with the destination enclave. Partner portal 116 may then encrypt each of the DEKs using an appropriate public key based on the corresponding destination enclave for which the DEK is generated. Next, partner portal 116 may send the encrypted data along with the encrypted key (e.g. 
encrypted DEK) to Data Manager 134 via communication channel 215.”; [0229]-[0230] and [0238] – encrypted data received at the Clean Room 300, including each DEK encrypted (i.e., wrapped) with the corresponding host key of the destination secure enclave as disclosed in [0144], is sent to the trusted JVMs before decryption and analysis); and executing the mutually attested code on the two or more partner datasets in the trusted execution environment based at least in part on using a host private key of the host machine to unwrap the respective secret keys ([0088]-[0089] – “a secure enclave 133 can provide secure storage of sensitive data from partner systems 133, and is the only component on platform 100 capable of decrypting the encrypted data with an appropriate and secure key management system. A secure enclave 133 can also be implemented to execute analytics on the decrypted data and provide output. […] Analytics may be executed on the encrypted data by worker applications. The worker application may decrypt the data using an appropriate decryption key prior to executing said analytics.”; [0151] – analytic operations performed in the Clean Room may be signed by all parties participating in the Clean Room, i.e., the analytic operations are “mutually attested” code; [0139] – “a public-private key pair may be used to encrypt the data. […] When Clear Room 300 receives the encrypted data through the communication channel, a corresponding private key may be used to decrypt the data, so that they may be cleaned, normalized and processed accordingly.”; [0144] – received encrypted data includes raw data encrypted with a DEK and the DEK encrypted (wrapped) with the public key, thus decrypting the received encrypted data with the corresponding private key would include decrypting (unwrapping) the DEK; [0189] – “Received consumer data may be decrypted and stored as raw data, which may be then normalized and stored as normalized data. 
Normalize data may be sent to a model trainer within the secure enclave 133 and further processed by the model trainer. Since different partners may have different data sets, the data sets need to be normalized prior to being aggregated and analyzed.”; [0277] – “At step 1350, platform 100 may decrypt and analyze the encrypted data to generate recommendations based on the decrypted data.”). ORTIZ fails to expressly teach the attestation report comprising an encrypted token. However, KO teaches transmitting an attestation report comprising a […] token and a host public key ([0068] – “a signed digital certificate is generated that includes a public key from a key pair that is generated by the secure enclave, and a secure quote that is generated in the encrypted memory and that includes an identifier of the secure enclave and a hash value of the public key.”; [0085] – “an API call is placed that includes the signed digital certificate and the attestation token to a data storage that persists the confidential data in a first encrypted state, the API call establishing a secure communication session between the data storage and the secure enclave based at least on the signed digital certificate included therewith. […] The signed digital certificate and the attestation token included in the API call enable resource system host 230 to validate that the API call are from expected, trusted originator.” The data included in the API call (attestation token and self-signed certificate including the public key) is analogous to the claimed attestation report.). ORTIZ and KO are considered to be analogous art to the claimed invention because they are both pertinent to the problem faced by the inventor of verifying the trustworthiness of an environment where an application is executed.
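For context on the key-wrapping pattern the examiner maps onto this limitation (ORTIZ [0139], [0144]): each partner encrypts its dataset under a per-enclave data-encryption key (DEK) and wraps that DEK with the enclave host's public key, so only the holder of the matching private key can recover the DEK and then the data. A minimal sketch, assuming the third-party `cryptography` package; all variable names are illustrative and not drawn from the references:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Host/enclave key pair; in the scheme above, the public half would be
# delivered to partners inside the attestation report.
host_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Partner side: encrypt the dataset with a fresh 256-bit DEK, then wrap
# the DEK with the host public key.
dek = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(dek).encrypt(nonce, b"partner dataset rows", None)
wrapped_dek = host_key.public_key().encrypt(dek, oaep)

# Enclave side: unwrap the DEK with the host private key, then decrypt
# the dataset inside the trusted environment.
recovered_dek = host_key.decrypt(wrapped_dek, oaep)
plaintext = AESGCM(recovered_dek).decrypt(nonce, ciphertext, None)
assert plaintext == b"partner dataset rows"
```

The unwrap-then-decrypt order here parallels the claim's step of using the host private key to unwrap the respective secret keys before executing the attested code.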
Therefore, it would have been obvious to one of ordinary skill in the art to have modified the remote attestation report transmitted to the partners in order to establish a secure communication channel as taught by ORTIZ to include both an attestation token and the host public key as taught by KO. Using both attestation tokens from a trusted attestation service as well as keys produced by the secure enclave provides for increased and more robust security (KO: [0031]), and specifically providing the attestation token enables the recipient to validate the secure enclave as a trusted environment with the trusted token provider who issued the token (KO: [0003] and [0040]). ORTIZ in view of KO fails to expressly teach the token is an encrypted token. However, Auth0 teaches using an encrypted token (Page 1 – “JSON Web Token (JWT) is an open standard (RFC 7519 […]) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA. Although JWTs can be encrypted to also provide secrecy between parties, we will focus on signed tokens. Signed tokens can verify the integrity of the claims contained within it, while encrypted tokens hide those claims from other parties.”; Page 4 – “Do note that for signed tokens this information, though protected against tampering, is readable by anyone. Do not put secret information in the payload or header elements of a JWT unless it is encrypted.”). Auth0 is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of ensuring privacy of data from other parties.
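The signed-versus-encrypted distinction quoted from Auth0 is easy to demonstrate: a signed JWT's header and payload are only base64url-encoded JSON, readable by anyone who holds the token. A stdlib-only sketch (the claim values and secret below are hypothetical):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding (RFC 7519 / RFC 4648).
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = {"alg": "HS256", "typ": "JWT"}
claims = {"enclave_id": "demo-enclave", "attested": True}  # hypothetical claims
secret = b"shared-hmac-secret"

# Sign: HMAC-SHA256 over "base64url(header).base64url(payload)".
signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(claims).encode())}"
sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
token = f"{signing_input}.{b64url(sig)}"

# Any holder of the token can read the payload without knowing the secret:
payload = token.split(".")[1]
payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
leaked = json.loads(base64.urlsafe_b64decode(payload))
assert leaked == claims  # tamper-evident, but not confidential
```

Hiding the claims from third parties requires encrypting the token (JWE-style) on top of, or instead of, the signature, which is the gap the examiner uses Auth0 to fill.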
Therefore, it would have been obvious to one of ordinary skill in the art to have modified the attestation token as taught by ORTIZ in view of KO to also be encrypted as taught by Auth0. Encrypting a token, such as a JWT used for information exchange between parties, and/or encrypting data within the token, improves security of information as encryption ensures private or secret data is hidden from other parties (Auth0: Pages 1 and 4). Regarding claim 3, the combination of ORTIZ in view of KO and Auth0 teaches The method of claim 1. KO teaches the method further comprising: generating, by the host machine, a self-signed certificate and a remote attestation report comprising a hash value associated with the self-signed certificate ([0036] – “Computing system 104, via its enclave, may generate a TLS certificate and a secure quote containing the hashed public key of the certificate,”; [0055] – “Enclave engine 208 includes a certificate generator 210, a quote generator 212, a cryptographic engine 214, and a token manager 216, in the illustrated embodiment. Certificate generator 210 is configured to generate self-signed digital certificates described herein that serve as credentials of trust for computing device 202 in its capacity as a secure enclave. In embodiments, a self-signed digital certificate generated by certificate generator 210 includes a secure quote and a public key of the key pair that includes the private key stored in encrypted memory 218. Quote generator 212 is configured to generate secure quotes described herein in encrypted memory 218. 
Secure quotes generated by quote generator 212 may include an identifier that identifies the secure enclave in which it is generated as well as a hash value of the public key of the secure enclave.”; The hash is “associated with the self-signed certificate” since it is a hash of the public key included in the certificate.); and transmitting, to an attestation endpoint of an attestation service ([0034] – “external computing system 102, computing system 104, token provider 110, and resource system may be enabled to communicate with each other over a network 114”; [0036] – “Token provider 110 (as an attestation service)”. Computing system 104 communicates with token provider 110 over a network, thus there necessarily must be endpoints, i.e., devices that connect to and exchange information with the network, at the token provider.), an attestation request comprising the self-signed certificate and the remote attestation report ([0036] – “Computing system 104, via its enclave may generate a TLS certificate and a secure quote containing the hashed public key of the certificate, and may provide the secure quote to token provider 110 to obtain the attestation token containing the secure quote data. […] Token provider 110 (as an attestation service) may generate and provide the attestation token when requested by computing system 104 via its enclave,”; [0049] – “Token generator 226 may be configured to generate tokens such as attestation tokens based at least on a hashed value of a public key provided in a secure quote and/or a self-signed digital certificate received from a secure enclave.”; [0055] – “Token manager 216 may be configured to request tokens, such as attestation tokens, from token provider 224”; [0071] – “In step 408, a token request is provided from API service 220, e.g., based on a command from token manager 216 shown in FIG. 2, to token provider 224.
[…] the token request in step 408 is provided, and may include the secure quote and the signed digital certificate generated by enclave engine 208”). It would have been obvious to one of ordinary skill to have modified the remote attestation between the partner systems and the platform implementing the trusted execution environment taught by ORTIZ to incorporate the attestation methods of KO. The methods of KO provide for increased and more robust security for sensitive data (KO: [0031]), and specifically using the attestation token obtained from a token provider (“attestation service”) based on the self-signed certificate enables the recipient to validate the secure enclave as a trusted environment with the trusted token provider who issued the token (KO: [0003] and [0040]). Regarding claim 5, the combination of ORTIZ in view of KO and Auth0 teaches The method of claim 1. ORTIZ teaches the method further comprising: establishing respective transport layer security (TLS) connections between the host machine and the endpoints associated with the two or more partners, wherein the respective secret keys are received via the respective TLS connections ([0138] – “Once partner system 115 is satisfied, based on the Signed Response, that the Clear Room 300 running on platform 100 is authentic and trustworthy, a SSL/TLS handshake may occur at step 430 in order to establish a secure communication channel”; [0142] – “a partner portal 116 may initiate a communication channel 215 thru TLS or VPN with Data Manager 134 for sending data to Clean Room 300.”; [0144] – “partner portal 116 may send the encrypted data along with the encrypted key (e.g.
encrypted DEK) to Data Manager 134 via communication channel 215.”; [0146] – “the communication channel 215 may be established and maintained under TLS”; [0171] – “the Data Manager 134 may send each respective instruction to each selected partner via a TLS or VPN connection with a set of public/private key for data encryption”; [0177] – “A partner system 115 in some embodiments may include a partner portal 116.”; [0072] – partner systems 115 communicate with platform 100, comprising Clean Room 300 with Data Manager 134, via a network). Regarding claim 6, the combination of ORTIZ in view of KO and Auth0 teaches The method of claim 5. KO further teaches wherein the respective TLS connections are established using a self-signed certificate in the attestation report ([0085]-[0086] – “The signed digital certificate and the attestation token included in the API call enable resource system host 230 to validate that the API call are from expected, trusted originator. In embodiments, API calls placed from API service 220 to resource system host 230 establish a secure communication session, such as a mutual TLS session that may be established via hyper-text transfer protocol (HTTP). In embodiments, API service 220 is configured to utilize the (self-) signed digital certificate generated by enclave engine 208 (e.g., via certificate generator 210) and included in the API call to establish the mutual TLS session with resource system host 230. Trust validator 232 (shown in FIG.
2) of resource system host 230 may be configured to analyze the signed digital certificate of the API call, and determine that a trusted caller has provided the API call, to permit the establishment of the secure communications session.”; [0055] – “a self-signed digital certificate generated by certificate generator 210 includes a secure quote and a public key of the key pair that includes the private key stored in encrypted memory 218.” The data included in the API call (attestation token and self-signed certificate including the public key) is analogous to the claimed attestation report.). It would have been obvious to one of ordinary skill in the art to have modified the TLS connections of ORTIZ to be established using a self-signed certificate in the attestation report as taught by KO. The self-signed certificate allows the recipient of the attestation report to determine the sender, the secure enclave, is trusted before permitting establishment of the TLS connection (KO: [0086]). Further, the methods of KO provide for increased and more robust security for sensitive data (KO: [0031]). Regarding claim 11, the combination of ORTIZ in view of KO and Auth0 teaches The method of claim 1. ORTIZ further teaches wherein executing the mutually attested code comprises: performing at least one multi-party computation in the data clean room using encrypted data from the two or more partner datasets ([0012]-[0013] – “The received data sets are stored in a protected memory region that is encrypted such that it is inaccessible to an operating system and kernel system. The protected memory region includes at least a data storage region and a data processing subsystem storage region maintaining an isolated data processing subsystem that processes the data to generate output data structures. 
In an example embodiment, the data processing subsystem applies a processing function that utilizes components of a query request and/or elements of the stored data sets in generating an output. As a simplified example, the data sets can be used for benchmarking and in response to query request about a benchmark statistic, the aggregated data sets can be queried to obtain a response (e.g., utilizing data sets not only from data source A, but also data source B, C, D while maintaining the privacy and security of the underlying data sets as no parties are able to access the protected memory region).”; [0059] – “The secure enclave data processor interfaces with the protected memory region to securely store and encrypt data sets received from a particular data source (e.g., from a partner organization)”; [0064] – “The output data structures can be generated responsive to query data messages, which, in some aspects, can include new information for the system to process, or can be query requests directed to aggregated existing information stored thereon in the protected memory region.”; [0081] – “The data storage 108, can also store output data structures, which can be interacted with through recommendation engine 120, the output data structures storing field values that are generated by processing by a data processing subsystem. 
In some embodiments, the data processing subsystem of the TEE 103 includes a stored function that is generated based on an aggregate of the data sets received from the corresponding partner computing devices.”; [0083] – “The I/O unit 107 can also receive as a data structure, an instruction set, or a query string, the query data message that triggers the data processing subsystem to generate various output data structures.”; [0089] – “Analytics may be executed on the encrypted data by worker applications.”), wherein a result of the at least one multi-party computation is returned to the data clean room orchestration system ([0164] – “data analysis may be required to complete the data query based on the data results sent from one or more partner portal 116 and/or partner analytics engine 117. Application Manager 1124 may instruct the appropriate number of secure enclaves 133a, 133b, 133n to complete the analysis based on the data results sent from the partners.”; [0165] – “A final data result may be generated by Clean Room 300, and returned to the client system 119 which sent the original data query 118 through the Resource Manager 1100 using a secure communication channel 216, which may be an encrypted channel.”). Regarding claim 13, the combination of ORTIZ in view of KO and Auth0 teaches The method of claim 1. ORTIZ further teaches wherein the host private key is protected within a sub-system of the host machine ([0139] – “Clean Room 300 may store the corresponding private key to each public key in a keystore 130. 
Keystore 130 may store a plurality of private keys, each corresponding to a public key that is assigned to a partner.”; [0016] – “The secure enclaves, in some embodiments, may store encryption keys that are used for securely accessing underlying data.”; [0025] – “the key required to decrypt the protected memory region into the computer readable cache memory is stored within the secure enclave data processor and not accessible outside the secure enclave data processor.”; [0073]-[0074] – “processing device 101 includes a secure area known as a trusted execution environment (TEE) 103. TEE 103 may include memory 109 and data storage 108 […] the protected memory region of the TEE 103 (e.g., secure data warehouse 108) is isolated through the use of encryption. In this example, the encryption keys are stored within the TEE 103 itself so that it can access data as required but the underlying data is not accessible by other components, such as an operating system operating on the server or a kernel process.”). Regarding claim 16, ORTIZ teaches An apparatus, comprising: one or more memories storing code; and one or more processors coupled with the one or more memories and individually or collectively operable to execute the code to cause the apparatus to ([0278]-[0280] – “The embodiments of the devices, systems and processes described herein may be implemented in a combination of both hardware and software. These embodiments may be implemented on programmable computers, each computer including at least one processor, a data storage system (including volatile memory or non-volatile memory or other data storage elements or a combination thereof), and at least one communication interface. Program code is applied to input data to perform the functions described herein and to generate output information. […] one or more computing devices having at least one processor configured to execute software instructions stored on a computer readable tangible, non-transitory medium. 
For example, the platform 100 may have a server that includes one or more computers coupled to a web server, database server, or other type of computer server in a manner to fulfill described roles, responsibilities, or functions.”; [0285]-[0286] – “The platform 100 may be implemented as a computing device with at least one processor, a data storage device (including volatile memory or non-volatile memory or other data storage elements or a combination thereof) […] For example, and without limitation, the computing device may be a server”): perform the method of claim 1. Accordingly, claim 16 is rejected as being unpatentable over ORTIZ in view of KO and Auth0 for the same reasons presented with respect to claim 1. Claim 17 recites substantially the same additional limitations as claim 3, applied to the apparatus of claim 16. Accordingly, claim 17 is unpatentable over ORTIZ in view of KO and Auth0 for the same reasons presented with respect to claim 3. Regarding claim 20, ORTIZ teaches A non-transitory computer-readable medium storing code that comprises instructions executable by one or more processors to ([0280] – “one or more computing devices having at least one processor configured to execute software instructions stored on a computer readable tangible, non-transitory medium.”; [0283] – “The technical solution of embodiments may be in the form of a software product instructing physical operations. The software product may be stored in a non-volatile or non-transitory storage medium, which can be a compact disk read-only memory (CD-ROM), a USB flash disk, or a removable hard disk. The software product includes a number of instructions that enable a computer device (personal computer, server, or network device) to execute the processes provided by the embodiments.”): perform the method of claim 1. Accordingly, claim 20 is rejected as being unpatentable over ORTIZ in view of KO and Auth0 for the same reasons presented with respect to claim 1. 
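The multi-party computation recited in claim 11 and quoted above from ORTIZ can be illustrated with additive secret sharing, one common way such aggregate queries are computed without exposing the underlying data sets. The following is a minimal sketch; the scheme and all names are illustrative, not drawn from ORTIZ.

```python
import random

PRIME = 2**61 - 1  # field modulus (illustrative choice)

def share(value, n_parties):
    """Split a value into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine additive shares into the original value."""
    return sum(shares) % PRIME

# Two partners contribute benchmark figures without revealing them.
a_shares = share(1200, 3)
b_shares = share(800, 3)

# Each compute node adds only the shares it holds; no node sees raw inputs.
partial = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]

# Only the aggregate result is reconstructed and returned.
assert reconstruct(partial) == 2000
```

Each individual share is uniformly random, so an aggregate statistic can be answered while the partner data sets themselves remain private.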
Claims 2 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over ORTIZ in view of KO and Auth0 as applied to claim 1 above, and further in view of CLEBSCH et al. (U.S. Pub. No. 2019/0163898), hereinafter CLEBSCH. Regarding claim 2, the combination of ORTIZ in view of KO and Auth0 teaches The method of claim 1, but fails to teach wherein at least one partner of the data clean room has an attestation policy that prohibits decryption within the trusted execution environment. However, CLEBSCH teaches wherein at least one partner of the data clean room has an attestation policy that prohibits decryption within the trusted execution environment ([0073] – “Remote attestation issues evidence that a relying party (e.g. remote user/customer/tenant or an attestation platform/service) can use to verify that a container has been created and configured as expected. In some examples, the relying parties will use the issued evidence as a basis for provisioning encryption keys.”; [0078]-[0079] – “tenant 302 generates a secure key release policy ψ which can be used for releasing the symmetric key. In some examples, secure key release policy ψ can be used for releasing a symmetric key to a Trusted Execution Environment (TEE) that has been configured to a state that meets the requirements of the secure key release policy ψ.[….] In some examples, secure key release policy ψ may be provided by a user according to their own requirements. At 338, tenant 302 provisions the symmetric key sk.sup.t to secure key store 328. The symmetric key sk.sup.t may be provisioned to secure key store 328 along with the secure key release policy ψ. Secure key store 328 may not release symmetric key sk.sup.t unless all the one or more requirements of the key release policy are met”; [0109]-[0111] – “attestation platform provides the token to service logic 116a. At 572, service logic 116a sends the token to secure key store 328. 
Secure key store 328 may previously have stored a symmetric key for encrypting a user's data filesystems. The symmetric key is bound to a secure key release policy ψ, as discussed above with respect to FIG. 3. Due to the attestation report verification performed by attestation platform 560 above, secure key store 328 can trust that the token is valid. At 574, “secure key store 328 determines if the claims of the token satisfy the requirements of the secure key release policy. If so, at 576 secure key store 328 may release the symmetric key for decrypting the user's filesystems to service logic 116a. If the claims of the token do not satisfy the requirements of the secure key release policy, the secure key store may determine to not release the symmetric key.” A tenant’s policy does not release a key for decrypting the tenant’s data (i.e., it does not allow, or prohibits, decryption) if the trusted execution environment/claims of the attestation token provided does not meet the requirements of the policy.). CLEBSCH is considered to be analogous art to the claimed invention because it is in the same field of confidential computing in the cloud, and is reasonably pertinent to the problem faced by the inventor of performing remote attestation between the TEE and data clean room partners. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the teachings of ORTIZ in view of KO and Auth0, such that one of the partners of the data clean room taught by ORTIZ has an attestation policy that prohibits decryption in the secure execution environment as taught by CLEBSCH. Doing so ensures secret data can only be decrypted in trusted execution environments that have been created and configured as expected, and cannot be decrypted if the trusted execution environment is not configured as required by the policy (CLEBSCH: [0073], [0078]-[0079], and [0082]). 
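The key-release check quoted from CLEBSCH reduces to comparing attested claims against the policy's requirements and withholding the key on any mismatch. The following is a minimal sketch; the function and claim fields are hypothetical, not drawn from CLEBSCH.

```python
def release_key(policy, token_claims, key):
    """Release the symmetric key only if every policy requirement is
    satisfied by the attested claims; otherwise release nothing."""
    for requirement, expected in policy.items():
        if token_claims.get(requirement) != expected:
            return None  # TEE not configured as required: no decryption
    return key

policy = {"enclave_measurement": "abc123", "debug_mode": False}

# A TEE whose attestation claims satisfy the policy receives the key.
assert release_key(policy, {"enclave_measurement": "abc123",
                            "debug_mode": False}, "sk") == "sk"

# A debug-enabled TEE violates the policy, so decryption is prohibited.
assert release_key(policy, {"enclave_measurement": "abc123",
                            "debug_mode": True}, "sk") is None
```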
Regarding claim 14, the combination of ORTIZ in view of KO and Auth0 teaches The method of claim 1, but fails to teach wherein at least one of the respective secret keys is released from a key management system in accordance with a key release policy associated with at least one partner of the data clean room. However, CLEBSCH teaches wherein at least one of the respective secret keys is released from a key management system in accordance with a key release policy associated with at least one partner of the data clean room ([0073] – “Remote attestation issues evidence that a relying party (e.g. remote user/customer/tenant or an attestation platform/service) can use to verify that a container has been created and configured as expected. In some examples, the relying parties will use the issued evidence as a basis for provisioning encryption keys.”; [0078]-[0079] – “tenant 302 generates a secure key release policy ψ which can be used for releasing the symmetric key. In some examples, secure key release policy ψ can be used for releasing a symmetric key to a Trusted Execution Environment (TEE) that has been configured to a state that meets the requirements of the secure key release policy ψ.[….] In some examples, secure key release policy ψ may be provided by a user according to their own requirements. At 338, tenant 302 provisions the symmetric key sk.sup.t to secure key store 328. The symmetric key sk.sup.t may be provisioned to secure key store 328 along with the secure key release policy ψ. Secure key store 328 may not release symmetric key sk.sup.t unless all the one or more requirements of the key release policy are met”; [0109]-[0111] – “attestation platform provides the token to service logic 116a. At 572, service logic 116a sends the token to secure key store 328. Secure key store 328 may previously have stored a symmetric key for encrypting a user's data filesystems. 
The symmetric key is bound to a secure key release policy ψ, as discussed above with respect to FIG. 3. Due to the attestation report verification performed by attestation platform 560 above, secure key store 328 can trust that the token is valid. At 574, “secure key store 328 determines if the claims of the token satisfy the requirements of the secure key release policy. If so, at 576 secure key store 328 may release the symmetric key for decrypting the user's filesystems to service logic 116a. If the claims of the token do not satisfy the requirements of the secure key release policy, the secure key store may determine to not release the symmetric key.”). CLEBSCH is considered to be analogous art to the claimed invention because it is in the same field of confidential computing in the cloud, and is reasonably pertinent to the problem faced by the inventor of performing remote attestation between the TEE and data clean room partners. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the teachings of ORTIZ in view of KO and Auth0, such that one of the partners of the data clean room taught by ORTIZ has a key release policy that a key store uses to release keys for decrypting the data (the secret keys) to the TEE as taught by CLEBSCH. Doing so ensures the trusted execution environment has been created and configured according to requirements of the user (“partner”), and ensures that the TEE is not able to decrypt secret data unless it meets the requirements set by the policy (CLEBSCH: [0073], [0078]-[0079], and [0082]). Claims 4 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over ORTIZ in view of KO and Auth0 as applied to claims 3 and 16 above, and further in view of de Boer (U.S. Pub. No. 2020/0076794). Regarding claim 4, the combination of ORTIZ in view of KO and Auth0 teaches The method of claim 3. 
KO teaches the method further comprising: receiving, from the attestation endpoint ([0034] – “external computing system 102, computing system 104, token provider 110, and resource system may be enabled to communicate with each other over a network 114”; [0036] – “Token provider 110 (as an attestation service)”. Computing system 104 communicates with token provider 110 over a network, thus there necessarily must be endpoints, i.e., devices that connect to and exchange information with the network, at the token provider.), an attestation response comprising the encrypted token which [is based on] the self-signed certificate and [includes] information for token verification ([0036] – “Token provider 110 (as an attestation service) may generate and provide the attestation token when requested by computing system 104 via its enclave, and may also provide signing verification certificates”; [0071] – “token provider 224 may fulfill the token request sent from API service 220 by generating, e.g., via token generator 226 and based at least on the (self-)signed digital certificate and the secure quote, and providing an attestation token to API service 220 in step 410”; [0049] – “Token provider 224, which may be an embodiment of token provider 110 described for FIGS. 1A and 1B above, and which may be an attestation service implemented as any type of server or computing device, as mentioned elsewhere herein, or as otherwise known, is configured to generate tokens such as attestation tokens, signed with a signing certificate of signing certificates 228, via a token generator 226, and may store one or more signing certificates 228 as described herein. […] An attestation token generated by token generator 226 and signed with a signing certificate thereof may be required for validating access requests” Signature = information for token verification included in the token). 
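The signed attestation token described in the KO passages above can be sketched as a minimal JWT-style construction in which the trailing signature is the information used for token verification. HMAC stands in here for KO's certificate-based signing purely for brevity, and every name below is illustrative.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_token(claims: dict, signing_key: bytes) -> bytes:
    """Issue a token whose trailing signature lets any holder of the
    verification key confirm the issuer and detect tampering."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    mac = hmac.new(signing_key, header + b"." + payload, hashlib.sha256)
    return header + b"." + payload + b"." + b64url(mac.digest())

def verify_token(token: bytes, signing_key: bytes) -> bool:
    """Recompute the signature over header.payload and compare."""
    header, payload, signature = token.split(b".")
    mac = hmac.new(signing_key, header + b"." + payload, hashlib.sha256)
    return hmac.compare_digest(b64url(mac.digest()), signature)

key = b"token-provider-secret"
token = sign_token({"enclave": "trusted"}, key)
assert verify_token(token, key)          # valid signature accepted
assert not verify_token(token, b"wrong") # any other key fails verification
```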
It would have been obvious to one of ordinary skill in the art to have modified the remote attestation between the partner systems and the platform implementing the trusted execution environment taught by ORTIZ to incorporate the attestation methods of KO. The methods of KO provide for increased and more robust security for sensitive data (KO: [0031]), and specifically using the signed attestation token obtained from a token provider (“attestation service”) based on the self-signed certificate enables the recipient to validate the secure enclave as a trusted environment with the trusted token provider who issued the token (KO: [0003] and [0040]). For the same reasons presented with respect to claim 1, it would have been obvious to have modified the token taught by KO to be an encrypted token as taught by Auth0. The combination of ORTIZ in view of KO and Auth0 fails to expressly teach the encrypted token includes the self-signed certificate. However, de Boer teaches a token includes a certificate ([0020]-[0021] – “the TLS handshake includes transmission of a X.509 certificate from TLS client 310 to reverse proxy 320 and transmission of a X.509 certificate from reverse proxy 320 to TLS client 310. Next, reverse proxy 320 creates an authentication token based on the X.509 certificate received from TLS client 310. For example, reverse proxy 320 may extract information from the certificate and wrap the information into a JSON Web Token (JWT) token using a local key, resulting in a signed JWT token. In some embodiments the JWT token includes the entire certificate.”). de Boer is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of establishing secure communications between the trusted execution environment and the data clean room partners. Both KO and de Boer teach receiving a certificate used in generating a TLS connection and subsequently generating a token “based on” the received certificate. 
Therefore, it would have been obvious to one of ordinary skill in the art that generating the token “based on” the self-signed certificate as taught by KO encompasses including the entire self-signed certificate in the token as evidenced by de Boer. Further, the methods taught by de Boer overcome incompatibilities between certificate-based mutual authentication and token-based authentication (see [0003] and [0013]). Claim 18 recites substantially the same additional limitations as claim 4, applied to the apparatus of claim 16. Accordingly, claim 18 is unpatentable over ORTIZ in view of KO and Auth0, and further in view of de Boer for the same reasons presented with respect to claim 4. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over ORTIZ in view of KO and Auth0 as applied to claim 1 above, and further in view of Jones et al. (NPL Document: “OAuth 2.0 Authorization Server Metadata”), hereinafter Jones. Regarding claim 7, the combination of ORTIZ in view of KO and Auth0 teaches The method of claim 1. KO further teaches wherein the encrypted token comprises a signature that is verifiable using a set of token signing keys provisioned by a […] endpoint of an attestation service ([0049] – “Token provider 224, which may be an embodiment of token provider 110 described for FIGS. 1A and 1B above, and which may be an attestation service implemented as any type of server or computing device, as mentioned elsewhere herein, or as otherwise known, is configured to generate tokens such as attestation tokens, signed with a signing certificate of signing certificates 228, via a token generator 226, and may store one or more signing certificates 228 as described herein. […] An attestation token generated by token generator 226 and signed with a signing certificate thereof may be required for validating access requests to secure data in a secure data storage, to computational services, and/or the like, in embodiments. 
Signing certificates 228 may comprise signed digital certificates generated by token provider 224, which may rotate (i.e., be replaced or updated) periodically for improving security thereof.”; [0075] – “signs the attestation token with a signing certificate of signing certificates 228 (which may be shared with resource system host 230,”; [0077] – “resource system host 230 is configured to request signing certificates from token provider 224. […] token provider 224 provides the requested signing certificates to resource system host 230 for later validation of API calls to access data persisted at resource system host 230.”; [0089] – “Trust validator 232 validates the attestation token signature, which is generated by token provider 224 with the attestation token, against a corresponding signing certificate of signing certificates 234 (shown in FIG. 2 and described above) and determines if the URI of the attestation token matches that known to be associated with token provider 224,”; [0055] and [0069] – digital certificates, such as the token signing certificates provisioned by the token provider, include public keys; [0034] – “external computing system 102, computing system 104, token provider 110, and resource system may be enabled to communicate with each other over a network 114”; [0036] – “Token provider 110 (as an attestation service)”. Resource system communicates with token provider 110 over a network, thus there necessarily must be endpoints, i.e., devices that connect to and exchange information with the network, at the token provider). It would have been obvious to one of ordinary skill to have modified the remote attestation between the partner systems and the platform implementing the trusted execution environment taught by ORTIZ to incorporate the attestation methods of KO. 
The methods of KO provide for increased and more robust security for sensitive data (KO: [0031]), and specifically using the signed attestation token obtained from a token provider (“attestation service”) based on the self-signed certificate enables the recipient to validate the secure enclave as a trusted environment with the trusted token provider who issued the token (KO: [0003] and [0040]). For the same reasons presented with respect to claim 1, it would have been obvious to have modified the token taught by KO to be an encrypted token, such as an encrypted JWT, as taught by Auth0. The combination of ORTIZ in view of KO and Auth0 fails to expressly teach the token signing keys provisioned by a metadata endpoint. However, Jones teaches signing keys provisioned by a metadata endpoint (Page 4 – “Authorization servers can have metadata describing their configuration. The following authorization server metadata values are used by this specification […] jwks_uri OPTIONAL URL of the authorization server’s JWK Set [JWK] document. The referenced document contains the signing key(s) the client uses to validate signatures from the authorization server. This URL MUST use the "https" scheme. The JWK Set MAY also contain the server’s encryption key or keys, which are used by clients to encrypt requests to the server. When both signing and encryption keys are made available, a "use" (public key use) parameter value is REQUIRED for all keys in the referenced JWK Set to indicate each key’s intended usage.”). Jones is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of verifying the trustworthiness of the platform where applications are deployed. Therefore, it would have been obvious to one of ordinary skill in the art that the endpoint of the attestation service which provisioned the token signing certificates and the corresponding keys could be a metadata endpoint as taught by Jones. 
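The jwks_uri mechanism quoted from Jones amounts to the verifier resolving the server's published metadata and selecting the advertised signing key. The following is a minimal offline sketch; the metadata and JWK Set documents are illustrative stand-ins for what a live fetch of the metadata endpoint would return.

```python
# Authorization server metadata, as would be fetched from the server's
# metadata endpoint; all values here are illustrative.
metadata = {
    "issuer": "https://attestation.example",
    "jwks_uri": "https://attestation.example/jwks",
}

# JWK Set document that the jwks_uri would serve. Both a signing key and
# an encryption key are present, so "use" is required on every key.
jwks = {
    "keys": [
        {"kid": "2026-key", "use": "sig", "kty": "RSA", "e": "AQAB"},
        {"kid": "old-key", "use": "enc", "kty": "RSA", "e": "AQAB"},
    ]
}

def signing_key_for(jwks_doc, kid):
    """Select the signing key named by a token header's 'kid', honoring
    the 'use' parameter so encryption keys are never used to verify."""
    for key in jwks_doc["keys"]:
        if key.get("kid") == kid and key.get("use") == "sig":
            return key
    return None

assert metadata["jwks_uri"].startswith("https://")  # spec requires https
assert signing_key_for(jwks, "2026-key")["use"] == "sig"
assert signing_key_for(jwks, "old-key") is None  # encryption key rejected
```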
Providing a metadata endpoint for a server implementing a service, e.g. the attestation service of KO, allows clients of the server to obtain information needed to interact with the server, such as the signing keys being used by the server which allow the client to validate signatures from the server (Jones: Pages 1 and 4). Claims 8-9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over ORTIZ in view of KO and Auth0 as applied to claims 1 and 16 above, and further in view of Modica et al. (U.S. Pub. No. 2022/0255731), hereinafter Modica. Regarding claim 8, the combination of ORTIZ in view of KO and Auth0 teaches The method of claim 1. ORTIZ teaches the method further comprising: [storing to a] location configured by the data clean room orchestration system, output data that results from executing the mutually attested code on the two or more partner datasets in the trusted execution environment ([0081] – “The data storage 108, can also store output data structures, which can be interacted with through recommendation engine 120, the output data structures storing field values that are generated by processing by a data processing subsystem. In some embodiments, the data processing subsystem of the TEE 103 includes a stored function that is generated based on an aggregate of the data sets received from the corresponding partner computing devices.”; [0086] – “The user data can only be decrypted and processed in Secure Enclaves 133. Data output generated by processes within Secure Enclaves 133 are then encrypted and stored within the secure data warehouse 108.”; [0088] – “A secure enclave 133 can also be implemented to execute analytics on the decrypted data”; [0089] – “Analytics may be executed on the encrypted data by worker applications. The worker application may decrypt the data using an appropriate decryption key prior to executing said analytics. Once analytics are done, output data may be generated. 
In some cases, output data may be encrypted.”; [0198]-[0199] – “After being notified of all workers having finished, it combines their partial results.”; [0215] – “If partial results are to be aggregated in the application master, the results may be stored in a secure enclave.”; [0150] – the data clean room orchestration system (e.g. comprising the “resource manager”) generates, i.e., configures, the secure enclaves; [0151] – analytic operations performed in the Clean Room may be signed by all parties participating in the Clean Room, i.e., the analytic operations are “mutually attested” code). The combination of ORTIZ in view of KO and Auth0 fails to expressly teach writing, to a shared storage location the output data. However, Modica teaches writing, to a shared storage location the output data ([0037] – “a "distributed object store service (OSS)" means an object store for storing private input data from a plurality of clients and supporting a secret share scheme determined by the MPC protocol used by the distributed multiparty computation service. Optionally, output data is also stored in the OSS. 
[…] The OSS supports the secret share scheme determined by the MPC protocol, […] the OSS applies additive secret sharing to securely store data across a plurality of VCPs”; [0038] – “the OSS and its implementing engines (OSEs) executed by each VCP”; [0040] – “the object store service exposes an API that allows, in the context of a "secret sharing" SS protocol, the first or the second or completely different clients to fetch the secret shares and recombine them to reconstruct the secret.”; [0278] – “each secret share provides an encrypted version of a portion of each value to each computing engine VCP1, VCP2.”; [0084] – “storing 212 at least one result of the secure multiparty computation via the distributed object store service, and/or outputting the at least one result to at least the first or second client.”; [0105] – “at the conclusion of a computation, MPC1 and MPC2 write their respective results to respective object store engines OSE1 and OSE2. In an embodiment, at the conclusion of a computation, identifiers of OSE objects that contain the results of the computation are communicated to one or more clients that triggered the multiparty computation. In an embodiment, at the conclusion of a computation, the results of the computation are communicated to one or more clients that triggered the multiparty computation. Subsequently, the private result of the computation is communicated to the respective clients C1 and C2.”; [0125] – a container orchestration system is used to implement (“configure”) the distributed OSS). Modica is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of securely performing a multi-party computation. 
Therefore, it would have been obvious to one of ordinary skill in the art to have modified the method of ORTIZ to incorporate the teachings of Modica such that the output data is written to a shared storage location (i.e., a storage location which the multiple parties involved in the computation can access, and is therefore “shared” by the multiple parties). The methods of storing the output data of a multi-party computation taught by Modica provide secure and scalable storage of data, as multiple computing engines can be used to scale the distributed object storage service, and distributing the secret shares of the data across the multiple VCPs provides security in storing the data (Modica: [0012]-[0013], [0037]-[0038] and [0040]). Regarding claim 9, the combination of ORTIZ in view of KO and Auth0, and further in view of Modica teaches The method of claim 8. Modica further teaches wherein the shared storage location containing the output data is accessible to the two or more partners of the data clean room ([0040] – “the object store service exposes an API that allows, in the context of a "secret sharing" SS protocol, the first or the second or completely different clients to fetch the secret shares and recombine them to reconstruct the secret.”; [0105]-[0107] – “A first client VCC1 may store or read private data from a first object store engine OSE1. […], identifiers of OSE objects that contain the results of the computation are communicated to one or more clients that triggered the multiparty computation. In an embodiment, at the conclusion of a computation, the results of the computation are communicated to one or more clients that triggered the multiparty computation. Subsequently, the private result of the computation is communicated to the respective clients C1 and C2.”; [0116] – “the object store service exposes a REST API that can be consumed by clients of the distributed data processing service 30 over HTTPS.”). 
It would have been obvious to one of ordinary skill in the art to have modified the method of ORTIZ to incorporate the teachings of Modica such that the output data is written to a shared storage location (i.e., a storage location which the multiple parties involved in the computation can access, and is therefore “shared” by the multiple parties). The methods of storing the output data of a multi-party computation taught by Modica provide secure and scalable storage of data, as multiple computing engines can be used to scale the distributed object storage service, and distributing the secret shares of the data across the multiple VCPs provides security in storing the data (Modica: [0012]-[0013], [0037]-[0038] and [0040]). Claim 19 recites substantially the same additional limitations as claim 8, applied to the apparatus of claim 16. Accordingly, claim 19 is unpatentable over ORTIZ in view of KO and Auth0, and further in view of Modica for the same reasons presented with respect to claim 8. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over ORTIZ in view of KO, Auth0, and Modica as applied to claim 8 above, and further in view of Hopper (U.S. Pub. No. 2024/0370306). Regarding claim 10, the combination of ORTIZ in view of KO and Auth0, and further in view of Modica teaches The method of claim 8. ORTIZ further teaches wherein the output data is returned to the data clean room orchestration system ([0164] – “one or more worker nodes may be used to perform the required data analytics. A final data result may be generated by Clean Room 300, and returned to the client system 119 which sent the original data query 118 through the Resource Manager 1100”; [0195] – “Worker enclaves may be embedded within a cluster managed by a resource management application”; [0198] – “An application master (or "master application") negotiates for resources, requests worker containers spawned and tracks their job progress. 
The application master requests a resource manager (see e.g. FIG. 11A) to spawn worker containers to perform a chunk of the analytics. It also sends the entire input file and directions as to what portion of the file to process. After being notified of all workers having finished, it combines their partial results.” Results are provided from worker containers back to the resource manager/application master (the “data clean room orchestration system”).). The combination of ORTIZ in view of KO, Auth0 and Modica fails to expressly teach using a private Internet Protocol (IP) address. However, Hopper teaches using a private Internet Protocol (IP) address ([0042] – “the container orchestration system to be accessible only to a private IP address, to improve security to corresponding containerized applications or for internal communication within the cluster.”). Hopper is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of implementing the data clean room using cloud computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the cluster comprising a resource management application (“data clean room orchestration system”) and a plurality of worker nodes implementing the secure enclaves and managed by the resource manager taught by ORTIZ (see ORTIZ: [0195], [0201]) such that communication in the cluster uses a private IP address as taught by Hopper. Using a private IP address improves security to the applications implemented in the cluster and for communication within the cluster (Hopper: [0042]). Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over ORTIZ in view of KO and Auth0 as applied to claim 1 above, and further in view of Wennerström et al. (U.S. Pub. No. 2022/0158926), hereinafter Wennerström. Regarding claim 12, the combination of ORTIZ in view of KO and Auth0 teaches The method of claim 1. 
ORTIZ further teaches wherein obtaining the two or more partner datasets comprises: reading a partner dataset from an encrypted data source configured by a partner of the data clean room ([0059] – “data sets received from a particular data source (e.g., from a partner organization) that may, in some embodiments, be encrypted with a key specific to the partner organization or data source.”; [0137] – “encrypted data may be transmitted from partner system 115 to platform 100”; [0141] – “The enclaves 133a, 133b, 133n may be referred to as destination enclaves as each enclave may be selected by Data Manager 134 to be a destination of encrypted data from partner portal 116. A file system such as Hadoop File System (HDFS) may be included in Clean Room to manage the encrypted data stored by the enclaves 133a, 133b, 133n.”; [0209] – “client system 119 (e.g. a partner system 115”; [0210] – “The containers 132a, 132b, 132c then read the encoded data from the database (e.g., HDFS)”. Partner system 115 is the “encrypted data source configured by a partner of the data clean room”, i.e., a source of encrypted data that is obviously configured by the partner to whom the system belongs.); and transferring the partner dataset to an […] data container accessible to the one or more VMs in the trusted execution environment ([0198] – “The application master requests a resource manager (see e.g. FIG. 11A) to spawn worker containers to perform a chunk of the analytics.”; [0209] – “resource manager 1100 and spawns worker nodes 132a, 132b within separate containers 1130, 1150 to perform tasks.”; [0210] – “client system 119 may submit a job to the Yarn Cluster, creating a container for the Application Master 1123. Next, the Application Master 1123 spawns several worker nodes or containers 132a, 132b, 132c, each with an enclave. The containers 132a, 132b, 132c then read the encoded data from the database (e.g., HDFS) and send the data to the enclave. 
Within the enclave, the data is decoded, processed, re-encoded, and then returned to the external container.”; [0218] – “A YARN container, which may act as a worker application or worker daemon, may coordinate resource allocation on one machine. Each YARN container may include an executor 1127a, 1127b, which can execute Spark tasks or applications. Generally speaking, an executor may be an implemented process launched for an application on a worker node or a YARN container, that runs tasks and keeps data in memory or disk storage across them. Each application may have its own executors.”; [0229] and [0232] – as clean room 300 receives encrypted data, it is transmitted to executors (e.g., within a YARN container). As shown in claim 1, executors comprise the VMs.). The combination of ORTIZ in view of KO and Auth0 fails to expressly teach the container is an ephemeral data container. However, Wennerström teaches an ephemeral data container ([0008] – “With containers' inherently lightweight nature, a single host can often support many more container instances than traditional virtual machines (VMs). These systems are characterized by being dynamic and ephemeral, as hosted services can be quickly scaled up or adapted to new requirements. Often short-lived, containers can be created and moved more efficiently than VMs, and they can also be managed as groups of logically-related elements (sometimes referred to as “pods” for some orchestration platforms, e.g., Kubernetes). These container characteristics impact the requirements for container networking solutions: the network should be agile and scalable. VMs, containers, and bare metal servers may need to coexist in the same computing environment, with communication enabled among the diverse deployments of applications.”; [0218] – “Ephemeral Containers are temporary containers that can be added side-by-side to other containers in a Pod”). 
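For context on the “Ephemeral Containers” mechanism quoted from Wennerström ([0218]): in Kubernetes, such a container is added to an already-running pod through the pod’s ephemeralcontainers subresource. A sketch of the patch body, expressed as a Python dict (the container, image, and target names are hypothetical; field names follow the Kubernetes EphemeralContainer API):

```python
# Patch body for adding a temporary side-by-side container to a running pod.
# The added container shares the pod and can target an existing container's
# process namespace for debugging.
patch = {
    "spec": {
        "ephemeralContainers": [
            {
                "name": "debugger",            # must be unique within the pod
                "image": "busybox:1.36",       # hypothetical debug image
                "command": ["sh"],
                "stdin": True,
                "tty": True,
                # attach to the namespace of the long-running worker container
                "targetContainerName": "worker",
            }
        ]
    }
}
```

Ephemeral containers are never restarted and carry no resource guarantees, which is what makes them suitable for short-lived tasks alongside minimal application containers.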
Wennerström is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of configuring and orchestrating the clean room implemented in the cloud. Therefore, it would have been obvious to one of ordinary skill in the art that the containers of ORTIZ may be ephemeral containers, as containerized systems as a whole are typically characterized as being “dynamic and ephemeral” (e.g., the life cycles of containers are typically short) as taught by Wennerström in order for the systems to be scalable and reconfigurable (Wennerström: [0007]-[0008]). Further, using ephemeral containers in a pod with other containers that host the application allows for keeping the application containers more minimal and reduces the attack surface of the application containers (Wennerström: [0218]).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over ORTIZ in view of KO and Auth0 as applied to claim 1 above, and further in view of Cela et al. (U.S. Pub. No. 2024/0291650), hereinafter Cela. Regarding claim 15, the combination of ORTIZ in view of KO and Auth0 teaches The method of claim 1, but fails to teach wherein the trusted execution environment for the data clean room is configured via a control plane of the data clean room orchestration system. However, Cela teaches wherein the trusted execution environment for the data clean room is configured via a control plane of the data clean room orchestration system ([0016] – “the service performing the computation (i.e., processing an event or request using business logic) is split between a data plane (DP) and a secure control plane (SCP). The business logic specific for the computation is hosted within the DP, where the DP is within a TEE, also referred to herein as an enclave. 
The business logic may be provided to the DP as a container, where a container is a software package containing all of the necessary elements to run the business logic in any environment. The container may, for example, be provided to the SCP by the business logic owner. Functionally, the SCP provides a secure execution environment and facilities to deploy and operate the DP at scale, including managing cryptographic keys, buffering requests, keeping track of the privacy budget, accessing storage, orchestrating a policy-based horizontal autoscaling, and more. The SCP execution environment isolates the DP from the specifics of the cloud environment, allowing for the service to be deployed on any supported cloud vendor without changes on the DP.”; [0026] – “The cloud platform 122 includes the SCP 126, which includes a TEE 124. The TEE 124 is a secure execution environment where the DP 128 is isolated”; [0027] – “One or more servers of the cloud platform 122 perform control plane (CP) functions (i.e., to support the SCP 126), and one or more servers perform data plane (DP) functions. For example, CP functions including key management and privacy budgeting services can be distributed across more than one Trusted Party. All functions of the DP 128 are carried out by processes within the TEE 124. Depending on the implementation, there may be more than one TEE per DP server. The TEE 124 may be deployed and operated by an administrator. The administrator can audit the logic to be implemented on the DP 128 and verify against a hash of the binary image to deploy the logic 142. On the CP, there may be a front end server or process 134 that receives external requests/event indications (e.g., from the client device 102), buffers requests/events until they can be processed by the DP 128, and forwards received requests to the DP 128.”; [0028] – “The business logic 142 is for implementing whichever application or service is being deployed on the TEE 124. 
The memory 140 also may store a key cache 146, which stores cryptographic keys for encrypting and decrypting communications. Further, the memory 140 includes a CPIO API 144, which includes a library of functions for communicating with other elements of the cloud platform 122, including components on the CP of the SCP 126. The CPIO API 144 can be configured to interface with any cloud platform provided by cloud provider. For example, in a first deployment, the SCP 126 may be deployed to a first cloud platform provided by a first cloud provider. The DP 128 hosts the particular business logic 142, and the CPIO API 144 facilitates communications between the logic 142 and the first cloud platform.”). Cela is considered to be analogous art to the claimed invention because it is in the same field of confidential computing in the cloud. Therefore, it would have been obvious to have modified the teachings of ORTIZ in view of KO and Auth0 to incorporate the teachings of Cela. Using a secure control plane in which the trusted execution environment is implemented and configured provides many different privacy, trust and security guarantees, and enables the same business logic (i.e., application code) to be deployed in different cloud providers (see [0018] and [0062]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Resch et al. (U.S. Pub. No. 2019/0297064) teaches that a “wrapped key” is a key that is encrypted, and decrypting a wrapped key is unwrapping the key (see [0178] and [0275]). Cloudflare (NPL Document V: “What is an endpoint”) teaches an endpoint is any device that connects to a computer network, and differentiates between an “API endpoint” and an “endpoint” (see pages 1-4). Microsoft (NPL Document W: “What is an endpoint?”) teaches “endpoints are physical devices that connect to and exchange information with a computer network” such as mobile devices, desktop computers, servers, etc. (see pages 1-4). 
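To illustrate the “wrapped key” concept Resch describes (a wrapped key is simply a key stored in encrypted form, and unwrapping is decrypting it), here is a toy Python sketch. The HMAC-derived keystream below is for illustration only; a real system would use a standardized construction such as AES Key Wrap (RFC 3394) or a cloud KMS wrap API.

```python
# Toy illustration of key wrapping: the data key is stored only in
# encrypted ("wrapped") form under a key-encryption key (KEK).
import hashlib
import hmac
import secrets

def _keystream(kek: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from the KEK (illustrative, not standard)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hmac.new(kek, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n]

def wrap_key(kek: bytes, data_key: bytes) -> bytes:
    """Wrapping = encrypting the data key under the KEK."""
    return bytes(a ^ b for a, b in zip(data_key, _keystream(kek, len(data_key))))

def unwrap_key(kek: bytes, wrapped: bytes) -> bytes:
    """Unwrapping = decrypting the wrapped key with the same KEK."""
    return bytes(a ^ b for a, b in zip(wrapped, _keystream(kek, len(wrapped))))

kek = secrets.token_bytes(32)       # key-encryption key
data_key = secrets.token_bytes(32)  # key protecting the actual data
wrapped = wrap_key(kek, data_key)
assert wrapped != data_key                    # stored form is ciphertext
assert unwrap_key(kek, wrapped) == data_key   # round trip recovers the key
```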
Kostakis et al. (U.S. Patent No. 11,651,287) teaches a data sharing system, where two database accounts share data in a data clean room, in which contents of an approved statements data comprises query statements approved by both database accounts for running on their respective data (see Col. 11, line 62 - Col. 13, line 30). Langseth et al. (U.S. Pub. No. 2023/0004669) teaches a data clean room allowing encryption-based data analysis across multiple accounts of database users, where the users’ data is encrypted using a key (see Abstract). Johnson et al. (NPL Document: “Intel Software Guard Extensions: EPID Provisioning and Attestation Services”) teaches remote attestation should be performed during the set-up phase of a secure enclave, in which the service provider and the secure enclave will agree on an authentication token (e.g., a private/public key pair generated by the enclave), which is then encrypted by the enclave using a Seal Key (see sections 1.2 and 1.3). It further teaches the signing key certificates contain the public signature verification key that is paired with the Intel Signing Key, used to authenticate objects (see section 4.2). Wei et al. (U.S. Pub. No. 2020/0167503) teaches a regulating part of a smart contract prohibits decryption of a regulated smart contract within a TEE by prohibiting storage or extraction of a decryption key corresponding to the encrypted regulated smart contract (see paragraphs [0057], [0097]-[0098], [0101]-[0102], [0112], and [0115]). Van Cleve et al. (U.S. Pub. No. 2023/0308277) teaches a server using multiple private signing keys to sign an attestation token, which can be verified by a recipient of the attestation token using corresponding public keys made available by the server (see [0096]). Gaddam et al. (U.S. Patent No. 
11,921,884) teaches writing output data to an output database multiple times (once for each data consumer) using different public keys for each data consumer, which allows each of the multiple data consumers to access the encrypted output data in the output database, e.g., a shared ledger (see Col. 9, line 61 - Col. 10, line 6). Pappachan et al. (U.S. Pub. No. 2022/0103516) teaches cloud service providers enable users to set up Virtual Cloud Networks that can be controlled while using shared public infrastructure, e.g., by allowing users to assign private IP addresses (see [0001]). Renke et al. (U.S. Pub. No. 2021/0011984) teaches the lifecycles of containers are ephemeral (see [0015] and [0072]). Chhabra et al. (U.S. Patent No. 12,010,227) teaches an instance metadata service can be a service that stores encrypted credentials, including a secret key used to sign requests and an encrypted token, as metadata, and a VM instance can submit a call or request to obtain the encrypted credentials (see Col. 2, lines 39-61).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNIFER MARIE GUTMAN, whose telephone number is (703) 756-1572. The examiner can normally be reached M-F: 9:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kevin Young, can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JENNIFER MARIE GUTMAN/
Examiner, Art Unit 2194

/KEVIN L YOUNG/
Supervisory Patent Examiner, Art Unit 2194

Prosecution Timeline

Aug 15, 2023
Application Filed
Jan 10, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12511179
MANAGING APPLICATION PROGRAMMING INTERFACES (APIs) OF A WEB APPLICATION
2y 5m to grant Granted Dec 30, 2025
Patent 12495084
REALTIME DISTRIBUTION OF GRANULAR DATA STREAMS ON A NETWORK
2y 5m to grant Granted Dec 09, 2025
Patent 12461798
MANAGING PERFORMANCE DURING COLLABORATION SESSIONS IN HETEROGENOUS COMPUTING PLATFORMS
2y 5m to grant Granted Nov 04, 2025
Patent 12450109
QUEUEING ASYNCHRONOUS EVENTS FOR ACCEPTANCE BY THREADS EXECUTING IN A BARREL PROCESSOR
2y 5m to grant Granted Oct 21, 2025
Patent 12444002
MULTISIDED AGNOSTIC INTEGRATION SYSTEM
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
59%
Grant Probability
99%
With Interview (+50.5%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 29 resolved cases by this examiner. Grant probability derived from career allow rate.
