Prosecution Insights
Last updated: April 19, 2026
Application No. 18/814,232

DATA PROCESSING METHOD, DIRECT MEMORY ACCESS ENGINE, AND COMPUTING DEVICE

Non-Final OA: §103, §112

Filed: Aug 23, 2024
Examiner: LOUIE, HOWARD H
Art Unit: 2494
Tech Center: 2400 — Computer Networks
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (above average; 149 granted / 181 resolved; +24.3% vs TC avg)
Interview Lift: +59.9% (strong; measured across resolved cases with interview)
Typical Timeline: 2y 10m average prosecution; 17 applications currently pending
Career History: 198 total applications across all art units

Statute-Specific Performance (allowance impact vs Tech Center average)

§101: 5.5% (-34.5% vs TC avg)
§103: 44.8% (+4.8% vs TC avg)
§102: 12.3% (-27.7% vs TC avg)
§112: 22.4% (-17.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 181 resolved cases.

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is in reply to papers filed on 10/24/2025. Claims 1-20 are pending. Claims 1, 11, and 14 are independent.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 12/20/2024, 6/4/2025, 7/15/2025, and 10/24/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 8-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Claim 8 recites "which runs a TA." However, claim 2 introduces a trusted application, and it is unclear whether claim 8 is introducing a new trusted application or referring to the previously introduced trusted application.

Claim 10 recites "further comprises a CA." However, claim 2 introduces a client application, and it is unclear whether claim 10 is introducing a new client application or referring to the previously introduced client application.

Claim 9 depends from claim 8 and is rejected for the same reasons as claim 8. Appropriate correction is required.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Kida et al., U.S. Publication 20220116403 (hereinafter "Kida"), in view of Mencias et al., U.S. Publication 20230224156 (hereinafter "Mencias").

As per claim 1, Kida discloses:

A data processing method comprising: [para. 38 describes transferring data using the DMA engine; para. 46 describes a data transfer process with the accelerator and TEE]

obtaining, by a direct memory access (DMA) engine [DMA engine 320, para. 38; DMA controller, para. 98] of a first computing device [system 100 such as used in a server system, para. 33; computing device 800, para. 127; computing device 200, figure 2], encrypted data [copying encrypted data from the host memory buffer, para. 38] that is to be processed [transfer data between the host memory buffer and the accelerator 136 buffer, para. 38; forwarding the plaintext data to the accelerator 136 buffer, para. 38; "that is to be processed" is intended use; the transferring from non-TEE memory can disclose "to be processed" in a rich execution environment] in a rich execution environment (REE) [the environment of the computing device (e.g., element 200, figure 2) hosting the untrusted operating system can be considered a REE] of the first computing device, wherein the first computing device comprises the REE running a general operating system [operating system, para. 20, 45; untrusted software ... operating system, para. 17] and a trusted execution environment (TEE) [TEE, para. 17]

Kida [0017]: Referring now to FIG. 1, a computing device 100 for secure I/O with an accelerator device includes a processor 120 and an accelerator device 136, such as a field-programmable gate array (FPGA). In use, as described further below, a trusted execution environment (TEE) established by the processor 120 securely communicates data with the accelerator 136. Data may be transferred using memory-mapped I/O (MMIO) transactions or direct memory access (DMA) transactions.
For example, the TEE may perform an MMIO write transaction that includes encrypted data, and the accelerator 136 decrypts the data and performs the write. As another example, the TEE may perform an MMIO read request transaction, and the accelerator 136 may read the requested data, encrypt the data, and perform an MMIO read response transaction that includes the encrypted data. As yet another example, the TEE may configure the accelerator 136 to perform a DMA operation, and the accelerator 136 performs a memory transfer, performs a cryptographic operation (i.e., encryption or decryption), and forwards the result. As described further below, the TEE and the accelerator 136 generate authentication tags (ATs) for the transferred data and may use those ATs to validate the transactions. The computing device 100 may thus keep untrusted software of the computing device 100, such as the operating system or virtual machine monitor, outside of the trusted code base (TCB) of the TEE and the accelerator 136. Thus, the computing device 100 may secure data exchanged or otherwise processed by a TEE and an accelerator 136 from an owner of the computing device 100 (e.g., a cloud service provider) or other tenants of the computing device 100. Accordingly, the computing device 100 may improve security and performance for multi-tenant environments by allowing secure use of accelerator devices.

performing, by the DMA engine, an operation of migrating the encrypted data [encrypted data, para. 38] to the TEE; and [the TEE may configure the accelerator 236 to perform a DMA operation, and the accelerator 236 performs a memory transfer, para. 17; The DMA engine 320 is configured to transfer data between the host memory buffer and the accelerator 136 buffer in response to the descriptor from the TEE 302 ... transferring the data includes copying encrypted data from the host memory buffer and forwarding the plaintext data to the accelerator 136 buffer in response to decrypting the encrypted data, para. 38]

performing, by the DMA engine, a decryption operation on the encrypted data during the operation of migrating the encrypted data to the TEE, to obtain decrypted data. [in response to decrypting the encrypted data, para. 38] [According to Kida para. 17, the TEE may configure the accelerator to perform a DMA operation, which involves a memory transfer and encryption/decryption. To implement this, according to Kida para. 38, the DMA engine copies encrypted data from the host memory buffer, decrypts the data, and forwards the plaintext data to the accelerator buffer. The data being transferred from memory will end up in the TEE via the accelerator buffer.]

Kida [0037]: The AT controller 318 is configured to initialize an AT in response to the initialization command from the TEE 302. The AT controller 318 is further configured to finalize the AT in response to the finalization command from the TEE 302.

Kida [0038]: The DMA engine 320 is configured to transfer data between the host memory buffer and the accelerator 136 buffer in response to the descriptor from the TEE 302. For a transfer from host to accelerator 136, transferring the data includes copying encrypted data from the host memory buffer and forwarding the plaintext data to the accelerator 136 buffer in response to decrypting the encrypted data. For a transfer from accelerator 136 to host, transferring the data includes copying plaintext data from the accelerator 136 buffer and forwarding encrypted data to the host memory buffer in response to encrypting the plaintext data.
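The claim-1 flow the examiner maps onto Kida's DMA engine (read encrypted data out of REE-visible memory, decrypt it in flight, and deposit only plaintext in a TEE-side buffer) can be modeled in a few lines of Python. This is an illustrative sketch only, not code from the application or the cited references; the buffer names, the `migrate_and_decrypt` helper, and the XOR stand-in for a real cipher (e.g., AES-GCM) are all invented here.

```python
# Illustrative model of a DMA engine that decrypts while migrating data
# from REE-visible memory to a TEE-side buffer. The XOR "cipher" stands
# in for a real decryption algorithm; all names are hypothetical.

def xor_keystream(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR each byte with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class DmaEngine:
    def migrate_and_decrypt(self, ree_memory: dict, tee_memory: dict,
                            src: str, dst: str, key: bytes) -> None:
        """Copy encrypted data out of REE memory, decrypting in flight,
        so only plaintext lands in the TEE-side buffer."""
        ciphertext = ree_memory[src]                 # obtain encrypted data (REE side)
        plaintext = xor_keystream(ciphertext, key)   # decrypt during the move
        tee_memory[dst] = plaintext                  # deposit decrypted data (TEE side)

key = b"\x5a\xa5"
ree = {"host_buf": xor_keystream(b"sensitive payload", key)}  # pre-encrypted at rest
tee = {}
DmaEngine().migrate_and_decrypt(ree, tee, "host_buf", "acc_buf", key)
assert tee["acc_buf"] == b"sensitive payload"   # plaintext exists only TEE-side
```

The point of interest for the claim mapping is that decryption is a side effect of the copy itself, not a separate step performed after the data has arrived.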
However, Kida does not expressly disclose "and a trusted execution environment (TEE) running a trusted operating system."

Mencias discloses "and a trusted execution environment (TEE) running a trusted operating system":

Mencias [0016]: The TEE 150 is an area on the main processor 120 of a computing device that has been separated from the REE 102. The TEE 150 includes a trusted OS 152 that runs parallel to the rich OS 104. The TEE 150 enables data to be stored, processed, and maintained so as to not allow data from outside the TEE 150 to interfere with the data inside the TEE 150. The trusted OS 152 executes a single virtual server 154, via a set of trusted APIs 156. The TEE 150 further includes a trusted bootloader 158 that receives a set of encryption keys for encrypting a data volume of an independent memory device. This process is described in greater detail with reference to FIG. 2. It should be appreciated that some or all of the functionality described herein can be performed on a computer system, for example, the computer system 400 shown in FIG. 4.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Kida with the technique for a TEE running a trusted operating system of Mencias to include "and a trusted execution environment (TEE) running a trusted operating system." One of ordinary skill in the art would have made this modification to improve the ability of the system to include a TEE running a trusted operating system. The system of the primary reference can be modified so that the TEE executes a trusted operating system. The trusted operating system can be trusted to execute applications within the TEE, to facilitate sensitive computations within the TEE.

As per claim 11, the claim is directed to a direct memory access engine with limitations which correspond to limitations of claim 1, and is rejected for the reasons detailed with respect to claim 1.
Claim 11 also recites, and Kida discloses:

A direct memory access (DMA) engine comprising: [DMA engine 320, para. 38; DMA controller, para. 98]

a processor; and [the DMA may be included in the processor described at para. 50; this means the DMA includes some portion of the processor, and the processor performs the functions; also, the DMA performs functions and so discloses a processor]

a power supply circuit, configured to supply power to the processor; [power supply, para. 45; power state of processor cores 402A-402N, para. 60]

wherein the processor is configured to: [DMA engine can perform operations, para. 54]

Kida para. 50: DMA engine 226 and/or the MMIO engine 228 may be included in other components of the computing device 200 (e.g., the processor 220, memory controller, or system agent), or in some embodiments may be embodied as separate components.

As per claim 12, Kida discloses wherein the DMA engine is integrated into a processor of the computing device; and [see para. 21, DMA engine may be included in the processor; DMA engine circuitry 320 may form a portion of the processor 120, para. 28] wherein the DMA engine, the processor of the computing device, a network interface card of the computing device, a storage device of the computing device, and a memory of the computing device are connected through a bus, and the bus comprises at least one of a peripheral component interconnect express (PCIe) bus, a compute express link (CXL) bus, or a unified bus (UB). [See Kida figure 10, which includes processor 1010 including the DMA engine described in para. 20-21, network interface 1070, storage device 1060, memory 1040, and bus 1016; the bus can be a PCI Express bus per Kida para. 23]

Kida [0021]: in some embodiments the DMA engine 126 and/or the MMIO engine 128 may be included in other components of the computing device 100 (e.g., the processor 120).

Kida para. 23: The accelerator device 136 may be coupled to the processor 120 via a high-speed connection interface such as a peripheral bus (e.g., a PCI Express bus).

As per claim 13, Kida discloses wherein the DMA engine is independent hardware; and [see para. 28, which states that the DMA engine is part of the accelerator (independent from the processor) and the components can be hardware] wherein a processor of the computing device in which the DMA engine is located, [interpreted as: the DMA engine is in the computing device, not necessarily in the processor] the DMA engine, a network interface card of the computing device, a storage device of the computing device, and a memory of the computing device are connected through a bus, and the bus comprises at least one of a peripheral component interconnect express (PCIe) bus, a compute express link (CXL) bus, and a unified bus (UB). [See Kida figure 3, which has DMA engine 320, which is part of the accelerator, and the same accelerator can be found as hardware accelerator 1068 in figure 10; figure 10 includes processor 1010, network interface 1070, storage device 1060, memory 1040, and bus 1016, and the bus can be a PCI Express bus per Kida para. 23]

As per claim 14, the claim is directed to a computing device with limitations which correspond to limitations of claim 1, and is rejected for the reasons detailed with respect to claim 1.

Claim 14 also recites, and Kida discloses:

A computing device, [computing device 100, figure 3] comprising a direct memory access (DMA) engine, [DMA engine 320, para. 38; DMA controller, para. 98] wherein the DMA engine is configured to: [The DMA engine 320 is configured to transfer data, para. 38]

Kida [0038]: The DMA engine 320 is configured to transfer data between the host memory buffer and the accelerator 136 buffer in response to the descriptor from the TEE 302.
For a transfer from host to accelerator 136, transferring the data includes copying encrypted data from the host memory buffer and forwarding the plaintext data to the accelerator 136 buffer in response to decrypting the encrypted data. For a transfer from accelerator 136 to host, transferring the data includes copying plaintext data from the accelerator 136 buffer and forwarding encrypted data to the host memory buffer in response to encrypting the plaintext data.

Claims 2, 4-5, 8-10, 15, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Kida in view of Mencias, further in view of Shaw et al., U.S. Publication 20180288095 (hereinafter "Shaw").

As per claim 2, the rejection of claim 1 is incorporated herein. Kida discloses wherein the first computing device further comprises a processor and a memory, [processor/memory, para. 15, 19-20; processor 120, para. 26] the processor separately runs a client application (CA) [untrusted software of the computing device, para. 17; virtual agent, para. 43; processor executes an application, para. 20; application executed, para. 27] and a trusted application (TA), [TEE 510 further includes an application 514, para. 54] the memory comprises a shared memory and a CA-associated memory, [this is the non-TEE memory that the data is copied from and the TEE memory that the data is copied to, in the description of Kida para. 38 and 17] and that the operation of migrating the encrypted data to the TEE environment comprises: copying, by the DMA engine [DMA engine 320 is configured to transfer data between the host memory buffer and the accelerator 136 buffer in response to the descriptor from the TEE 302, para. 38] to the shared memory, [accelerator 236 performs a memory transfer, para. 17; this Kida memory should be inside the TEE] the encrypted data stored in the CA-associated memory. [host memory buffer, para. 38]

Kida [0038]: The DMA engine 320 is configured to transfer data between the host memory buffer and the accelerator 136 buffer in response to the descriptor from the TEE 302. For a transfer from host to accelerator 136, transferring the data includes copying encrypted data from the host memory buffer and forwarding the plaintext data to the accelerator 136 buffer in response to decrypting the encrypted data. For a transfer from accelerator 136 to host, transferring the data includes copying plaintext data from the accelerator 136 buffer and forwarding encrypted data to the host memory buffer in response to encrypting the plaintext data.

However, the combination of Kida and Mencias does not expressly disclose "the shared memory is accessible to the TA."

Shaw discloses shared memory accessible to a trusted application:

Shaw [0022]: Thus, TEE 104 may include a common repository 114 in which at least some data generated or stored by trusted applications 112 may be accessible to other trusted applications 112. Access to common repository 114 (and to certain data stored within common repository 114) may be controlled by a policy module 116. For example, policy module 116 may include certain policies (or rules) that restrict access to application data based on one or more factors, such as the identity of a source (e.g., trusted application 112 or external device) that stored application data in common repository 114, the identity of the element (e.g., trusted application 112 or external device) seeking access to the application data in common repository 114, a type of the application data, a location of the application data within common repository 114, or any other characteristic, such as a time of day, a network connection status, a battery power level, which trusted (or common) applications are actively running (as opposed to running in the background of device 100), or the like.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Kida and Mencias with the technique for a trusted application copying data from a common repository and performing an operation on the copied information of Shaw to include "the shared memory is accessible to the TA." One of ordinary skill in the art would have made this modification to improve the ability of the system to share data to and between trusted applications within the TEE. The system of the primary reference can be modified so that the trusted applications have access to and share a repository for storing data and are able to access data from other trusted applications from within the repository according to access policies.

As per claim 4, the rejection of claim 2 is incorporated herein. Kida discloses the decrypted data as argued above with respect to claim 1. However, the combination of Kida and Mencias does not expressly disclose wherein the memory further comprises a TA-associated memory, and the method further comprises: obtaining, by the TA, the decrypted data from the shared memory; storing, by the TA, the decrypted data in the TA-associated memory; and performing, by the TA, a data processing operation on the decrypted data, to obtain a data processing result.

Shaw discloses a trusted application copying data from a common repository [para. 22] to a memory portion associated with the trusted application [when the trusted application accesses the data in the common repository, the trusted application must have the obtained data in memory ("each trusted application 112 may have a designated memory for storing its data", para. 21)] and performing an operation [make use of GPS information, para. 33; tracking user movement information, para. 23] on the copied information [a data processing result can be disclosed by the results of the use of GPS/tracking user movement; the Shaw trusted application cannot do anything with data unless it has a copy of the data in memory, and therefore the data must be copied from the common repository to the memory designated to that trusted application according to Shaw para. 21-22].

Shaw [0020]: TEE 104 may include trusted applications 112 that operate and store data within TEE. For example, a trusted application 112 may be one that requires certain heightened security. ... The difference may be that trusted applications 112 operate in TEE 104, and common applications 108 operate in OS environment 102.

Shaw [0021]: Because of the secure nature of TEE 104, each trusted application 112 may have a designated memory for storing its data, such that only that trusted application 112 may access that memory designated to that trusted application 112. Restricting access to trusted-application data to the trusted application 112 from which it originated may have certain security advantages. However, within TEE 104, and with trusted external devices, it may be advantageous to share data generated by a first trusted application 112 with other trusted applications 112 or other external devices.

Shaw [0022]: Thus, TEE 104 may include a common repository 114 in which at least some data generated or stored by trusted applications 112 may be accessible to other trusted applications 112. Access to common repository 114 (and to certain data stored within common repository 114) may be controlled by a policy module 116. For example, policy module 116 may include certain policies (or rules) that restrict access to application data based on one or more factors, such as the identity of a source (e.g., trusted application 112 or external device) that stored application data in common repository 114, the identity of the element (e.g., trusted application 112 or external device) seeking access to the application data in common repository 114, a type of the application data, or a location of the application data within common repository 114.

Shaw [0023]: a user may indicate that any global positioning system (GPS) information received on device 100 may be shared with another trusted application 112, such as one that facilitates a particular wearable device of the user.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Kida and Mencias with the technique for a trusted application copying data from a common repository and performing an operation on the copied information of Shaw to include wherein the memory further comprises a TA-associated memory, and the method further comprises: obtaining, by the TA, the decrypted data from the shared memory; storing, by the TA, the decrypted data in the TA-associated memory; and performing, by the TA, a data processing operation on the decrypted data, to obtain a data processing result. One of ordinary skill in the art would have made this modification to improve the ability of the system to share data between trusted applications within the TEE. The system of the primary reference can be modified so that the trusted applications share a repository for storing data and are able to access data from other trusted applications from within the repository according to access policies.

As per claim 5, the rejection of claim 4 is incorporated herein.
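The claim-4 sequence at issue (a TA obtains the decrypted data from shared memory, stores it in its own TA-associated memory, then performs a data processing operation on it) can be sketched in Python for orientation. This is a hypothetical model only; the `SharedMemory` class, the allow-list access check (loosely in the spirit of Shaw's policy module 116), and the checksum "processing" step are invented here and appear in neither the application nor the references.

```python
# Hypothetical model of claim 4: a TA pulls decrypted data from a shared
# memory region into its own private (TA-associated) memory, then runs a
# data processing operation on it. All names and the policy rule are invented.

class SharedMemory:
    """Shared region inside the TEE; reads gated by a simple access policy."""
    def __init__(self, allowed_tas):
        self._data = {}
        self._allowed = set(allowed_tas)

    def write(self, key, value):
        self._data[key] = value

    def read(self, key, ta_name):
        if ta_name not in self._allowed:
            raise PermissionError(f"{ta_name} may not read shared memory")
        return self._data[key]

class TrustedApplication:
    def __init__(self, name):
        self.name = name
        self.private_memory = {}          # TA-associated memory

    def process(self, shared, key):
        data = shared.read(key, self.name)   # obtain from shared memory
        self.private_memory[key] = data      # store in TA-associated memory
        return sum(data) % 256               # toy "data processing result"

shared = SharedMemory(allowed_tas={"ta1"})
shared.write("payload", b"\x01\x02\x03")     # decrypted data left by the DMA engine
ta = TrustedApplication("ta1")
result = ta.process(shared, "payload")
assert ta.private_memory["payload"] == b"\x01\x02\x03"
assert result == 6
```

The access check is what distinguishes a policy-governed shared region from the TA's designated private memory, which only that TA can touch.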
Kida discloses that DMA is involved in obtaining the decrypted data [The DMA engine 320 is configured to transfer data between the host memory buffer and the accelerator 136 buffer in response to the descriptor from the TEE 302 ... transferring the data includes copying encrypted data from the host memory buffer and forwarding the plaintext data to the accelerator 136 buffer in response to decrypting the encrypted data, para. 38].

However, the combination of Kida and Mencias does not expressly disclose wherein the obtaining the decrypted data from the shared memory comprises: obtaining, by the TA, the decrypted data from the shared memory in a DMA manner.

Shaw discloses obtaining, by the TA, the decrypted data from the shared memory [see the rejection of claim 4].

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Kida and Mencias with the technique for a trusted application copying data from a common repository [para. 22] and performing an operation [make use of GPS information, para. 33; tracking user movement information, para. 23] on the copied information of Shaw to include wherein the obtaining the decrypted data from the shared memory comprises: obtaining, by the TA, the decrypted data from the shared memory in a DMA manner. ["in a DMA manner" is disclosed because DMA is involved in obtaining the decrypted data as taught in the Kida reference] One of ordinary skill in the art would have made this modification to improve the ability of the system to share data between trusted applications within the TEE. The system of the primary reference can be modified so that the trusted applications share a repository for storing data and are able to access data from other trusted applications from within the repository according to access policies.

As per claim 8, the rejection of claim 4 is incorporated herein.
Kida discloses wherein the first computing device further comprises a network interface card, [Kida [0094]: Example 1 includes an apparatus comprising a network interface card (NIC)] which runs a TA in a secure state, [Kida [0054]: As illustrated, the TEE 510 further includes an application 514] and the method further comprises: performing, by the TA in the secure state, [encrypting a data item generated by application 514 to generate an encrypted data item, para. 56] an encryption operation on the data processing result. [This is disclosed by the encrypting of a data item generated by the application in the TEE for an RDMA transaction as part of a secure communications channel.]

Kida [0054]: As illustrated, the TEE 510 further includes an application 514.

Kida [0056]: Platform 500 also includes a NIC 520, which may be comparable to NIC 150 discussed above. As shown in FIG. 5, NIC 520 includes a cryptographic engine 513 comprising an encryptor/decryptor 515 and a cryptographic engine 523 comprising an encryptor/decryptor 525. The cryptographic engine 513 includes encryptor/decryptor 515 that may be configured to perform a cryptographic operation associated with a data transfer transaction, such as a remote direct memory access (RDMA) transaction. For an RDMA transaction, the cryptographic operation includes encrypting a data item generated by application 514 to generate an encrypted data item, or decrypting a data item sent to application 514 to generate a decrypted data item. The cryptographic engine 523 is configured to enable protected data transfer between an application and networked devices via its components. In one embodiment, encryptor/decryptor 525 may be configured to perform cryptographic operations to secure a communications channel between NIC 520 and other platforms.

However, the combination of Kida and Mencias does not expressly disclose "copying, by the TA, the data processing result to a storage area associated with the TA in the secure state in the network interface card."

Shaw discloses copying, by the TA, [share data generated by a first trusted application 112 with other trusted applications 112, para. 21] the data processing result to a storage area associated with the TA in the secure state.

Shaw [0021]: Common application 108 and trusted applications 112 may generate or store data within device 100. For example, common applications 108 may store data in (or accessible through) OS environment 102, while trusted applications 112 may store data in (or accessible through) TEE 104. Because of the secure nature of TEE 104, each trusted application 112 may have a designated memory for storing its data, such that only that trusted application 112 may access that memory designated to that trusted application 112. Restricting access to trusted-application data to the trusted application 112 from which it originated may have certain security advantages. However, within TEE 104, and with trusted external devices, it may be advantageous to share data generated by a first trusted application 112 with other trusted applications 112 or other external devices.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Kida and Mencias with the technique for copying data to a storage area associated with the trusted application of Shaw to include "copying, by the TA, the data processing result to a storage area associated with the TA in the secure state in the network interface card." One of ordinary skill in the art would have made this modification to improve the ability of the system to allow the trusted application to have a copy of the data for processing.
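The claim 8/9 flow mapped onto Kida's NIC (a TA in the secure state encrypts its data processing result as it is handed off for transmission, so only ciphertext reaches the second computing device) can be sketched as follows. This is a hypothetical illustration; the `NicCryptoEngine` class, the wire-as-list stand-in, and the XOR substitute for a real cipher are not from Kida or the application.

```python
# Hypothetical sketch of claims 8-9: the TA's data processing result is
# encrypted on its way out through the NIC's secure-state cryptographic
# engine (loosely analogous to Kida's encryptor/decryptor 515), so only
# ciphertext crosses to the second computing device. Names are invented.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class NicCryptoEngine:
    """Models the encryptor in the NIC's secure-state transmit path."""
    def __init__(self, session_key: bytes):
        self.session_key = session_key

    def send_result(self, result: bytes, wire: list) -> None:
        # Encrypt the data processing result during the send, so the
        # plaintext never appears on the wire.
        wire.append(xor_bytes(result, self.session_key))

wire = []   # stands in for the link to the second computing device
engine = NicCryptoEngine(session_key=b"\x42\x99")
engine.send_result(b"processing result", wire)
assert wire[0] != b"processing result"                          # ciphertext on the wire
assert xor_bytes(wire[0], b"\x42\x99") == b"processing result"  # peer can decrypt
```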
The system of the primary reference can be modified to allow the trusted application to access data and to have a copy for processing by the trusted application. As per claim 9, the rejection of claim 8 is incorporated herein. Kida discloses wherein the performing the encryption operation on the data processing result comprises: performing, by the TA in the secure state, the encryption operation on the data processing result [For an RDMA transaction, the cryptographic operation includes encrypting a data item generated by application 514 to generate an encrypted data item, para. 56 ]in a process of receiving the data processing result, to obtain an encrypted data processing result; and the method further comprises: sending, by the TA in the secure state,[ enable protected data transfer between an application and networked devices via its components …. to secure a communications channel between NIC 520 and other platforms, para. 56] the encrypted data processing result to a second computing device. Kida [0054] As illustrated, the TEE 510 further includes an application 514. Kida [0056] Platform 500 also includes a NIC 520, which may be comparable to NIC 150 discussed above. As shown in FIG. 5, NIC 520 includes a cryptographic engine 513 comprising an encryptor/decryptor 515 and a cryptographic engine 523 comprising an encryptor/decryptor 525. The cryptographic engine 513 includes encryptor/decryptor 515 that may be configured to perform a cryptographic operation associated with a data transfer transaction, such as a remote direct memory access (RDMA) transaction. For an RDMA transaction, the cryptographic operation includes encrypting a data item generated by application 514 to generate an encrypted data item, or decrypting a data item sent to application 514 to generate a decrypted data item. The cryptographic engine 523 is configured to enable protected data transfer between an application and networked devices via its components. 
In one embodiment, encryptor/decryptor 525 may be configured to perform cryptographic operations to secure a communications channel between NIC 520 and other platforms.

As per claim 10, the rejection of claim 8 is incorporated herein. Kida discloses wherein the network interface card further comprises a CA [untrusted software of the computing device, para. 17; virtual agent, para. 43; applications, programs, para. 20] in a non-secure state; and wherein the CA in the non-secure state [storage devices store applications executed by processor cores, para. 89; applications executed by processor, para. 90] and the TA in the secure state are run [application executed by the computing device 100 in a secure enclave, para. 27; application 514 operating in TEE 510 (or host) to generate packets (e.g., RDMA), para. 63] in a processor of the network interface card, and resources used by the CA in the non-secure state and the TA in the secure state to transmit data are isolated [as seen in Kida figure 10, the wireless I/O interface 1020 and the wired I/O interface 1030 are resources used by both types of applications and are isolated from each other; also, the network interface 1070 is isolated from the other two interfaces 1020 and 1030 in figure 10; also, in figure 5, the cryptographic engine 513 and the cryptographic engine 523 are isolated from each other and are used to encrypt remote communications as described in para. 56].

As per claim 15, the claim(s) is/are directed to a computing device with limitations which correspond to limitations of claim 2, and is/are rejected for the reasons detailed with respect to claim 2.

As per claim 17, the claim(s) is/are directed to a computing device with limitations which correspond to limitations of claim 4, and is/are rejected for the reasons detailed with respect to claim 4.
As per claim 18, the claim(s) is/are directed to a computing device with limitations which correspond to limitations of claim 5, and is/are rejected for the reasons detailed with respect to claim 5.

Claims 3 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kida in view of Mencias, further in view of Bursell et al., U.S. Publication 20210157904 (hereinafter “Bursell”).

As per claim 3, the rejection of claim 1 is incorporated herein. Kida discloses wherein the performing the decryption operation on the encrypted data during the operation of migrating the encrypted data to the TEE, to obtain the decrypted data comprises: performing the decryption operation on the encrypted data in a sequence of obtaining the encrypted data and other encrypted data [para. 17 describes multiple DMA transfers; because there are multiple transactions, there will be a sequence of obtaining the encrypted data and other encrypted data].

Kida [0017]: a trusted execution environment (TEE) established by the processor 120 securely communicates data with the accelerator 136. Data may be transferred using memory-mapped I/O (MMIO) transactions or direct memory access (DMA) transactions.

However, the combination of Kida and Mencias does not expressly disclose and based on an identity key associated with the encrypted data, to obtain the decrypted data.

Bursell discloses utilizing a decryption key to decrypt data.

Bursell [0078]: Verification module 318 may decrypt wrapped key 114 using the candidate key and compare the decrypted verification code to an expected value to determine whether the candidate key correctly decrypted wrapped key 114. In another example, the verification code may be encrypted with the content and therefore may be embedded in content 102. Verification module 318 may unwrap the wrapped key using the candidate key and then use the unwrapped key to decrypt the verification code and content 102.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Kida and Mencias with the technique for utilizing a decryption key to decrypt data of Bursell to include and based on an identity key associated with the encrypted data, to obtain the decrypted data. One of ordinary skill in the art would have made this modification to improve the ability of the system to utilize a decryption key to decrypt data. The system of the primary reference can be modified to decrypt a decryption key and then utilize the decryption key to decrypt data, as taught in the Bursell reference.

As per claim 16, the claim(s) is/are directed to a computing device with limitations which correspond to limitations of claim 3, and is/are rejected for the reasons detailed with respect to claim 3.

Claims 6-7 and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kida in view of Mencias, further in view of Shaw, further in view of Wei et al., PCT Publication 2020052579 (hereinafter “Wei”) (machine translation).

As per claim 6, the rejection of claim 4 is incorporated herein. However, the combination of Kida, Mencias, and Shaw does not expressly disclose wherein the first computing device further comprises a storage device, which comprises a TA namespace, and the method further comprises: storing, by the TA, a first intermediate result in the TA namespace, wherein the first intermediate result comprises an intermediate result of performing the data processing operation on the encrypted data.

Wei discloses encrypting data received at a security chip and storing the encrypted data in a namespace [page 10, para. 2] of a storage device [page 16, para.
9], which discloses the limitations as follows: wherein the first computing device [the computing device (e.g., terminal device) with the security chip, page 7, third paragraph from bottom] further comprises a storage device [a storage unit 1006, page 16, para. 9], which comprises a TA namespace [namespace, page 10, para. 2], and the method further comprises: storing, by the TA [security chip stores, page 8, para. 7; data is stored, page 8, para. 4], a first intermediate result [the first intermediate result can be the result of encrypting data, page 8, para. 4, or the result of sending the data to the security chip, page 8, para. 5] in the TA namespace [namespace, page 10, para. 2], wherein the first intermediate result comprises an intermediate result of performing the data processing operation [encrypting data, page 8, para. 4, and/or sending the data to the security chip, page 8, para. 5] on the encrypted data.

Wei, page 4, second paragraph from bottom: Referring to FIG. 2, FIG. 2 is … operating system 2 and user management (User Manager) are located in a rich execution environment (REE, Rich Execution Environment), and trusted applications are located in a trusted execution environment (TEE, Trust Execution Environment). The embodiment of this application calls the TEE hardware a security chip, which is responsible for storing encrypted data and verifying decrypted data.

Wei, page 8, para. 4: The security chip stores in advance the first encrypted data corresponding to the first operating system and the second encrypted data corresponding to the second operating system. … The terminal device collects the first encrypted data corresponding to the first operating system and the second encrypted data corresponding to the second operating system, and the first encrypted data and the second encrypted data are stored. This can be achieved in the following ways:

Wei, page 8, para.
5: Method 1: Collect the first encrypted data corresponding to the first operating system in a first display area, and the first operating system sends the first encrypted data to the security chip; collect, in a second display area, the second encrypted data corresponding to the second operating system, and the second operating system sends the second encrypted data to the security chip; the security chip stores the first encrypted data and the second encrypted data.

Wei, page 8, para. 7: Method 3: Use a sound collection device to collect the first encrypted data corresponding to the first operating system, and the first operating system sends the first encrypted data to the security chip; use the sound collection device to collect the second encrypted data corresponding to the second operating system, and the second operating system sends the second encrypted data to the security chip; the security chip stores the first encrypted data and the second encrypted data.

Wei, page 8, para. 9: After the security chip obtains the first encrypted data and the second encrypted data, it compares the first decrypted data with the first encrypted data; if the first decrypted data is consistent with the first encrypted data, the first decrypted data check succeeds. The second decrypted data is compared with the second encrypted data, and if the second decrypted data is consistent with the second encrypted data, the second decrypted data is verified successfully.

Wei, page 10, para. 2: Operating system 1 and operating system 2 respectively transmit the obtained first unlock data and second unlock data to the TEE, that is, the security chip. … A namespace can be adopted in the Android operating system.

Wei, page 16, para. 9: The apparatus further includes: a storage unit 1006, configured to store the first encrypted data and the second encrypted data.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Kida, Mencias, and Shaw with the technique for encrypting data received at a security chip and storing the encrypted data in a namespace of a storage of Wei to include wherein the first computing device further comprises a storage device, which comprises a TA namespace, and the method further comprises: storing, by the TA, a first intermediate result in the TA namespace, wherein the first intermediate result comprises an intermediate result of performing the data processing operation on the encrypted data. One of ordinary skill in the art would have made this modification to improve the ability of the system to store processed data. The system of the primary reference, as modified, can be further modified to process data, such as transferring and/or encrypting data, with a trusted application storing the data in a namespace in a storage.

As per claim 7, the rejection of claim 6 is incorporated herein. However, the combination of Kida, Mencias, and Shaw does not expressly disclose wherein the storage device comprises a controller, and the method further comprises: before storing the first intermediate result in the TA namespace, performing, by the controller, an encryption operation on the first intermediate result, to obtain encrypted data of the first intermediate result.
Wei discloses the security chip [disclosing a controller] encrypting the data before storing it [see citations in claim 6].

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Kida, Mencias, and Shaw with the technique for encrypting data received at a security chip and storing the encrypted data in a namespace of a storage of Wei to include wherein the storage device comprises a controller, and the method further comprises: before storing the first intermediate result in the TA namespace, performing, by the controller, an encryption operation on the first intermediate result, to obtain encrypted data of the first intermediate result. One of ordinary skill in the art would have made this modification to improve the ability of the system to store processed data. The system of the primary reference, as modified, can be further modified to process data, such as encrypting data, with a trusted application storing the data in a namespace in a storage.

As per claim 19, the claim(s) is/are directed to a computing device with limitations which correspond to limitations of claim 6, and is/are rejected for the reasons detailed with respect to claim 6.

As per claim 20, the claim(s) is/are directed to a computing device with limitations which correspond to limitations of claim 7, and is/are rejected for the reasons detailed with respect to claim 7.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HOWARD H. LOUIE, whose telephone number is 571-272-0036. The examiner can normally be reached Monday-Friday, 9 AM-5 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jung W. Kim, can be reached at 571-272-3804. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HOWARD H. LOUIE/
Examiner, Art Unit 2494

/JUNG W KIM/
Supervisory Patent Examiner, Art Unit 2494

Prosecution Timeline

Aug 23, 2024
Application Filed
Dec 22, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591657
METHOD FOR ACQUIRING IDENTITY AUTHENTICATION INFORMATION, APPARATUS, STORAGE MEDIUM AND SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12579230
Multi-Factor Authentication with Increased Security
2y 5m to grant Granted Mar 17, 2026
Patent 12579262
SYSTEMS AND METHODS FOR NEUTRALIZING MALICIOUS CODE WITH NESTED EXECUTION CONTEXTS
2y 5m to grant Granted Mar 17, 2026
Patent 12574413
SYSTEMS AND METHODS FOR IMPLEMENTING A FAMILY POLICY USING A COOPERATIVE SECURITY FABRIC
2y 5m to grant Granted Mar 10, 2026
Patent 12547425
LIBRARY IDENTIFICATION IN APPLICATION BINARIES
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+59.9%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 181 resolved cases by this examiner. Grant probability derived from career allow rate.
