DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on February 13, 2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claims 2, 3, 5, 8, 12, 13, 15, and 18 are objected to because of the following informalities:
- Claims 2 and 12: “wherein the two set” should read “wherein the two sets”
- Claims 5 and 15: it is unclear how an interface can be generated in code; it appears in light of the specification that the claim should read “generating, in the first part of code and second part of code, code corresponding to an interface…”
- Claims 8 and 18: “establishing, in response to that the trusted execution environment passes verification” should read “establishing, in response to determining that the trusted environment passes verification”
- Claims 3 and 13 are objected to due to dependency on claims 2 and 12, respectively.

Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
The claim does not fall within at least one of the four categories of patent eligible subject matter because it is directed to a computer program product that comprises machine-executable instructions; therefore, it is directed to software per se.

The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”).

Claim 1

Step 1: The claim recites a method; therefore, it is directed to the statutory category of processes.

Step 2A Prong 1: The claim recites:
- “dividing the neural network model into multiple parts, wherein the multiple parts comprise a first part for processing an input to the neural network model and a second part for receiving an output from the first part”: This limitation could encompass mentally dividing the neural network into multiple parts by making a mental determination of which nodes should comprise a first part for processing an input and which nodes should comprise a second part for receiving an output from the first part
- “converting, based on syntax for a trusted execution environment, a first part of code in source code of the neural network model and corresponding to the first part”: This limitation could encompass mentally converting a first part of code in source code of the neural network model by comparing the syntax of the source code to syntax for a trusted execution environment and mentally performing a conversion

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “compiling the converted first part of code and a second part of code in the source code and corresponding to the second part”. However, this limitation recites insignificant extra-solution activity (MPEP § 2106.05(g)) because it is merely a necessary precursory step for generating a neural network model and does not impose a meaningful limit on the claim.
The claim also recites “arranging the compiled first part of code and the compiled second part of code respectively in the trusted execution environment and an untrusted execution environment for generating the neural network model”. However, this is merely indicating a technological environment in which to apply a judicial exception (MPEP § 2106.05(h)), because the limitation merely recites arranging the neural network model that was created through the use of judicial exceptions in a technological environment for generation of the neural network model.

Step 2B: The claim does not contain significantly more than the judicial exception. The compiling code limitation, in addition to being insignificant extra-solution activity, is also well-understood, routine, and conventional (Sid Lakhdar et al. (US20220147442), [0121]: “The module 64 is capable, for example in a conventional manner, of compiling this intermediate source code in order to obtain the executable code 76”). The recitation of arranging the compiled first and second parts of code in the trusted and untrusted execution environments is merely indicating a technological environment in which to apply a judicial exception, as stated above. As an ordered whole, the claim is directed to a mentally performable process of dividing a neural network model and converting a first part of code in source code based on syntax for a trusted execution environment. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 2

Step 1: A process, as above.
Step 2A Prong 1: The claim recites:
- “determining a computational graph of the neural network model, wherein the computational graph comprises multiple operators”: This limitation could encompass mentally determining a computational graph of the neural network model having multiple operators
- “dividing the multiple operators into two sets of operators based on security of the input, wherein the two set of operators correspond to the first part and the second part”: This limitation could encompass mentally dividing the multiple operators into two sets of operators by making a mental determination of which operators should comprise the first part and which operators should comprise the second part based on security of the input

Step 2A Prong 2: This judicial exception is not integrated into a practical application. No further additional elements are recited in claim 2; see analysis of claim 1.

Step 2B: This claim does not contain significantly more than the judicial exception. No further additional elements are recited in claim 2; see analysis of claim 1.

Claim 3

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the same judicial exception as claim 2.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. Claim 3 further recites “acquiring, from the source code, the first part of code corresponding to a first set of operators in the two sets of operators.” However, this limitation recites the insignificant extra-solution activity of mere data gathering (MPEP § 2106.05(g)).

Step 2B: This claim does not contain significantly more than the judicial exception. The acquiring code limitation, in addition to being insignificant extra-solution activity, is also directed to the well-understood, routine, and conventional activity of storing and retrieving information in memory (MPEP § 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir.
2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93).

Claim 4

Step 1: A process, as above.

Step 2A Prong 1: The claim recites:
- “converting the first part of code based on a hard coding rule”: This limitation could encompass mentally converting the first part of code based on a hard coding rule

Step 2A Prong 2: This judicial exception is not integrated into a practical application. No further additional elements are recited in claim 4; see analysis of claim 1.

Step 2B: This claim does not contain significantly more than the judicial exception. No further additional elements are recited in claim 4; see analysis of claim 1.

Claim 5

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the same judicial exception as claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. Claim 5 further recites “generating, in the first part of code and the second part of code, an interface capable of communicating between the trusted execution environment and the untrusted execution environment.” However, this limitation recites the insignificant extra-solution activity of mere data gathering and output (MPEP § 2106.05(g)) because the interface is generated to transmit data between the trusted and untrusted execution environments.

Step 2B: This claim does not contain significantly more than the judicial exception. The generating an interface limitation, in addition to being insignificant extra-solution activity, is also directed to the well-understood, routine, and conventional activity of receiving or transmitting data over a network (MPEP § 2106.05(d)(II); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)).

Claim 6

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the same judicial exception as claim 1.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. Claim 6 further recites “arranging, based on a programming language of the neural network model, a runtime environment in the trusted execution environment.” However, this is merely indicating a technological environment in which to apply a judicial exception (MPEP § 2106.05(h)), because the limitation merely further specifies the environment in which the neural network model that was created using judicial exceptions is arranged.

Step 2B: This claim does not contain significantly more than the judicial exception. The recitation of arranging a runtime environment in the trusted execution environment is merely indicating a technological environment in which to apply a judicial exception, as stated above.

Claim 7

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the same judicial exception as claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. Claim 7 further recites “acquiring configuration information of a multi-party computation frame for the trusted execution environment, wherein the configuration information comprises at least one of the following: multiple parties participating in computation of the neural network model; a runtime environment of the neural network model; an input address where the multi-party computation frame acquires an input from one of the multiple parties; or an output address where the multi-party computation frame returns a computation result to one of the multiple parties.” However, this limitation recites the insignificant extra-solution activity of mere data gathering (MPEP § 2106.05(g)).

Step 2B: This claim does not contain significantly more than the judicial exception.
The acquiring configuration information limitation, in addition to being insignificant extra-solution activity, is also directed to the well-understood, routine, and conventional activity of storing and retrieving information in memory (MPEP § 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93).

Claim 8

Step 1: A process, as above.

Step 2A Prong 1: The claim recites:
- “determining whether the trusted execution environment passes verification by each of the multiple parties”: This limitation could encompass mentally determining whether the trusted execution environment passes verification by each of the multiple parties

Step 2A Prong 2: This judicial exception is not integrated into a practical application. Claim 8 further recites “establishing, in response to that the trusted environment passes verification by each of the multiple parties, a secure communication channel between each of the multiple parties and the trusted execution environment.” However, this limitation recites the insignificant extra-solution activity of mere data gathering and output (MPEP § 2106.05(g)), because the secure communication channel is established for transmitting data between a party and the trusted execution environment.

Step 2B: This claim does not contain significantly more than the judicial exception. The establishing a secure communication channel limitation, in addition to being insignificant extra-solution activity, is also well-understood, routine, and conventional (Hopen et al.
(US20060161970), [0032]: “For example, the client 313 may seek to establish a secure communication channel using any desired conventional security protocol, such as the Secure Socket Layers (SSL) protocol, the Hypertext Transfer Protocol Secure (HTTPS) protocol (which employs the Secure Socket Layers (SSL) protocol), the Internet Protocol Secure protocol (IPSec), the SOCKet Secure (SOCKS) protocol, the Layer Two Tunneling Protocol (L2TP), the Secure Shell (SSH) protocol, or the Point-to-Point Tunneling Protocol (PPTP)”).

Claim 9

Step 1: A process, as above.

Step 2A Prong 1: The claim recites:
- “decrypting the encrypted data”: This limitation could encompass mentally decrypting the encrypted data using a decryption key to mentally decode the encryption
- “…obtain an intermediate result”: This limitation could encompass mentally obtaining an intermediate result

Step 2A Prong 2: This judicial exception is not integrated into a practical application. Claim 9 further recites “acquiring encrypted data from the input address” and “transmitting the intermediate result to the second part so as to train the neural network model.” However, these limitations recite the insignificant extra-solution activity of mere data gathering (MPEP § 2106.05(g)). The claim also further recites “inputting the decrypted data into the first part of the neural network model.” However, this limitation recites mere instructions to apply an exception using a generic computer programmed with a generic class of computer algorithm (MPEP § 2106.05(f)), because the claim recites inputting data to a generic neural network to perform the judicial exception of obtaining an intermediate result.

Step 2B: This claim does not contain significantly more than the judicial exception.
The acquiring data and transmitting the result limitations, in addition to being insignificant extra-solution activity, are also directed to the well-understood, routine, and conventional activity of receiving and transmitting data over a network (MPEP § 2106.05(d)(II); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)). The inputting the data into the first part of the neural network model limitation is mere instructions to apply the exception for the same reasons given above.

Claim 10

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the same judicial exception as claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. Claim 10 further recites “returning, based on the configuration information, model parameters for the first part to the output address as the computation result.” However, this limitation recites the insignificant extra-solution activity of mere data gathering (MPEP § 2106.05(g)).

Step 2B: This claim does not contain significantly more than the judicial exception. The returning model parameters to the output address limitation, in addition to being insignificant extra-solution activity, is also directed to the well-understood, routine, and conventional activity of receiving and transmitting data over a network (MPEP § 2106.05(d)(II); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)).

Claims 11-19

Step 1: Claims 11-19 recite an electronic device, and therefore are directed to the statutory category of machines.

Step 2A Prong 1: Claims 11-19 recite the same judicial exception as claims 1-9, respectively.

Step 2A Prong 2: The judicial exception is not integrated into a practical application.
The analysis at this step mirrors that of claims 1-9, respectively, except insofar as claims 11-19 additionally recite “an electronic device, comprising: at least one processor; and a memory coupled to the at least one processor and having instructions stored therein, wherein the instructions, when executed by the at least one processor, cause the electronic device to execute actions comprising: [the method].” However, this limitation recites mere instructions to apply an exception using a generic computer (MPEP § 2106.05(f)).

Step 2B: These claims do not contain significantly more than the judicial exception. The analysis at this step mirrors that of claims 1-9, respectively, except insofar as claims 11-19 additionally recite the electronic device limitation, which is mere instructions to apply an exception using a generic computer for the same reasons given above.

Claim 20

Step 1: Claim 20 recites a computer program product that comprises computer-executable instructions, and therefore is directed to software per se. However, for the purposes of the abstract idea rejection, the examiner will assume that it is directed to the statutory category of articles of manufacture.

Step 2A Prong 1: Claim 20 recites the same judicial exception as claim 1.

Step 2A Prong 2: The judicial exception is not integrated into a practical application. The analysis at this step mirrors that of claim 1, except insofar as claim 20 additionally recites “A computer program product that is tangibly stored on a non-transitory computer-readable medium and comprises machine-executable instructions, wherein the machine-executable instructions, when executed by a machine, cause the machine to execute the following: [the method].” However, this limitation recites mere instructions to apply an exception using a generic computer (MPEP § 2106.05(f)).

Step 2B: The claim does not contain significantly more than the judicial exception.
The analysis at this step mirrors that of claim 1, except insofar as claim 20 additionally recites the computer program product limitation, which is mere instructions to apply an exception using a generic computer for the same reasons given above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 11, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. (WO2021143466) (“Lu”) in view of Zwanzger (US12489621) and further in view of Hu et al. (US20220343165) (“Hu”).

Regarding claim 1, Lu discloses “A method for generating a neural network model, comprising: dividing the neural network model into multiple parts, wherein the multiple parts comprise a first part for processing an input to the neural network model and a second part for receiving an output from the first part (Lu, [0006]: the neural network model being sequentially divided into a first part neural network model and a second part neural network model … the method comprising … providing training sample data from a data provider to the current first part neural network model in the trusted execution environment to obtain an intermediate result; providing the intermediate result to the current second part neural network model in the untrusted execution environment to obtain a current prediction value; the examiner notes that the training sample data corresponds to an “input to the neural network model” and the intermediate result corresponds to “an output from the first part”); … and arranging … [a] first part of code and … [a] second part of code respectively in the trusted execution environment and an untrusted execution environment for generating the neural network model”
(Lu, [0006]: the first part neural network model being located in a trusted execution environment of a first device, and the second part neural network model being located in an untrusted execution environment of a second device; the examiner notes that the neural network model consists of code, [0111]: the program code itself, which can be read from a readable medium, can perform the functions of any of the above embodiments; therefore the code for the first part neural network model corresponds to a first part of code and the code for the second part neural network model corresponds to a second part of code).

Lu does not appear to explicitly disclose the further limitations of the claim. However, Zwanzger discloses “converting, based on syntax for a trusted execution environment, a first part of code in source code of… [a] neural network model and corresponding to… [a] first part [of a neural network model]” (Zwanzger, (43): Any content, in particular data to be protected, which can also be configured in the form of code, can be distributed to third-party devices in pre-encrypted form and securely converted there by means of obfuscated code for use within a secure execution environment, in particular a TEE, and Zwanzger, (17): In some embodiments, the data (D) to be protected are configured as: Program code, Interpretable code, Parameterizations for algorithms, Numerical data and/or Weights of a neural network; the examiner notes that the “data to be protected” corresponds to “a first part of code in source code of a neural network model” because it is weights of a neural network configured in the form of code, and converting the syntax of code to an obfuscated syntax for use within a TEE corresponds to “converting, based on syntax for a trusted execution environment, a first part of code”).

Both Zwanzger and the instant application relate to neural networks and are analogous.
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to modify the method disclosed by Lu to include converting, based on syntax for a trusted execution environment, a first part of code in source code of the neural network model and corresponding to the first part, as disclosed by Zwanzger, for the purpose of making sensitive code executable on a third-party device without disclosing the code itself (see Zwanzger, (7)).

Neither Lu nor Zwanzger appears to disclose the compiling limitation, or that the arranged first and second parts of code are compiled. However, Hu discloses “compiling… [a] first part of code and a second part of code in… source code and corresponding to… [a] second part” (Hu, [0044]: Illustrated processing block 92 provides for compiling, by the first process, the first portion of the neural network into a first compilation result that is compatible with the first device and Hu, [0046]: Illustrated processing block 102 provides for compiling, by the second process, the second portion of the neural network into a second compilation result that is compatible with the second device).

Both Hu and the instant application relate to neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to modify the method disclosed by the combination of Lu and Zwanzger to include compiling the first part of code and a second part of code in the source code and corresponding to the second part, as disclosed by Hu, and to modify the arranged first part of code and second part of code to be compiled, for the purpose of ensuring that the first part of code and second part of code are compatible for execution in their respective environments (see Hu, [0044] and [0046]).
Regarding claim 11, Lu discloses “An electronic device, comprising: at least one processor; and a memory coupled to the at least one processor and having instructions stored therein, wherein the instructions, when executed by the at least one processor, cause the electronic device to execute actions comprising: [the method]” (Lu, [0020]: an electronic device is provided, comprising: one or more processors, and a memory coupled to the one or more processors, the memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform the neural network model training method as described above). The further limitations of the claim are identical to claim 1 and are rejected for the same reasons as claim 1 above.

Regarding claim 20, Lu discloses “A computer program product that is tangibly stored on a non-transitory computer-readable medium and comprises machine-executable instructions, wherein the machine-executable instructions, when executed by a machine, cause the machine to execute the following: [the method]” (Lu, [0021]: a machine-readable storage medium is provided that stores executable instructions, which, when executed, cause the machine to perform the neural network model training method as described above). The further limitations of the claim are identical to claim 1 and are rejected for the same reasons as claim 1 above.

Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Zwanzger and Hu, and further in view of Tucker et al. (US10860925) (“Tucker”) and Jung et al. (US20240073732) (“Jung”).

Regarding claim 2, Lu as modified by Zwanzger and Hu does not appear to disclose the further limitations of the claim.
However, Tucker discloses “wherein dividing the neural network model into multiple parts comprises: determining a computational graph of the neural network model (Tucker, (18): For example, the request can identify a computational graph representing an inference for a particular neural network), wherein the computational graph comprises multiple operators (Tucker, (7): The computational graph includes nodes connected by directed edges. Each node in the computational graph represents an operation; the examiner notes that the nodes correspond to “multiple operators” because the nodes represent operations); and dividing the multiple operators into two sets of operators… (Tucker, (32): The system partitions the computational graph into multiple subgraphs (step 208). Each subgraph includes one or more nodes in the computational graph; the examiner notes that the subgraphs correspond to “sets of operators” and that “multiple subgraphs” can be two sets), wherein the two set of operators respectively correspond to… [a] first part and… [a] second part [of a neural network model]” (Tucker, (9): In some implementations, the operations represented in the computational graph are neural network operations).

Tucker and the instant application both relate to neural networks and are analogous.
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to modify the method disclosed by the combination of Lu, Zwanzger, and Hu to include determining a computational graph of the neural network model, wherein the computational graph comprises multiple operators, and dividing the multiple operators into two sets of operators, wherein the two sets of operators respectively correspond to the first part and the second part, as disclosed by Tucker, and one would have been motivated to do so for the advantage of being able to more easily partition the neural network in computational graph form than in the conventional neural network representation (see Tucker, (7)).

Lu, Zwanzger, Hu, and Tucker do not appear to explicitly disclose that the multiple operators are divided “based on security of the input.” However, Jung discloses dividing a machine learning model “based on security of the input” (Jung, [0135]: Referring to FIG. 8, the system 800 may assign a split AI/ML model for performing split inference in units of split layers according to requirements (calculation capability, latency, personal information protection) of services (object recognition, augmented reality) to a device and a server (base station), and [0186]: When the device adjusts the split point in the direction of the input layer, in order to preserve privacy for raw data, the split point may be adjusted only up to a layer which may go through an activation function at least once; the examiner notes that adjusting the split point of the ML model reads on dividing the model, and splitting it in a way that preserves privacy for raw data reads on “based on security of the input”).

Jung and the instant application both relate to neural networks and are analogous.
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the division of the multiple operators of the neural network disclosed by the combination of Lu, Zwanzger, Hu, and Tucker to be based on security of the input, as disclosed by Jung, for the purpose of preserving the privacy of the input data (see Jung, [0186]).

Claim 12 is an electronic device claim corresponding to method claim 2 and is rejected for the same reasons as claim 2 above.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Zwanzger, Hu, Tucker, and Jung, and further in view of Jain et al. (US20210158131) (“Jain”).

Regarding claim 3, Lu, Zwanzger, Hu, Tucker, and Jung do not appear to explicitly disclose the further limitations of the claim. However, Jain discloses “acquiring, from… source code… [a] first part of code corresponding to a first set of operators in… two sets of operators” (Jain, [0054]: The partitioner 334 of the machine learning framework 332 may perform a first level of partitioning of the neural network operators from the input code 342 to identify neural network operators to be executed by the machine learning framework 332 and neural network operators to be sent to the compiler. The output of the partitioner 334 may be neural network operators included on a white list of neural network operators received from the compiler 330.
The compiler 330 can be activated, for example, when the partitioner 334 identifies neural network operators from the input code 342 that are supported by the compiler 330; the examiner notes that the neural network operators to be executed and the neural network operators to be sent to the compiler correspond to “two sets of operators,” input code 342 corresponds to “source code,” and the neural network operators from the input code that are supported by the compiler correspond to “a first part of code corresponding to a first set of operators” and are “acquired” by the partitioner 334). Jain and the instant application both relate to neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to modify the combination of Lu, Zwanzger, Hu, Tucker, and Jung to include acquiring, from the source code, the first part of code corresponding to a first set of operators in the two sets of operators, as disclosed by Jain, for the purpose of preparing the first part of the partitioned code for compilation and execution on a particular device that is different from the device where the second part of code is executed (see Jain, Abstract, last sentence).

Claim 13 is an electronic device claim corresponding to method claim 3 and is rejected for the same reasons as claim 3 above.

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Zwanzger and Hu, and further in view of Ibrahim et al. (CN114556344) (“Ibrahim”).

Regarding claim 4, Lu as modified by Zwanzger and Hu discloses “converting the first part of code” (see rejection of claim 1) but does not appear to disclose the further limitations of the claim.
However, Ibrahim discloses “converting… code based on a hard coding rule” (Ibrahim, [0042]: The predefined set of encryption algorithms 218 can be defined by standards, by the manufacturer of the encryption coprocessor 206 or client device 106, or by the developer of client device 106. Therefore, the predefined set of encryption algorithms 218 can be hardcoded within the hardware, firmware, or software implementing the encryption coprocessor 206 and cannot be configured by the client application 203… the encrypted code 118 can be encrypted using one of the predefined encryption algorithms 218; the examiner notes that the encryption of code 118 corresponds to “converting code” and the hardcoded, predefined encryption algorithm used for the encryption corresponds to the “hard coding rule” because an encryption algorithm is a rule for encrypting). Ibrahim and the instant application both relate to the use of trusted execution environments to protect data and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the converting of the first part of code disclosed by the combination of Lu, Zwanzger, and Hu to be based on a hard coding rule as disclosed by Ibrahim, and one would have been motivated to do so for the purpose of ensuring that the conversion rule follows predefined standards and does not change during execution (see Ibrahim, [0042]).

Claim 14 is an electronic device claim corresponding to method claim 4 and is rejected for the same reasons as claim 4 above.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Zwanzger and Hu, and further in view of Jankly (US20210119765).

Regarding claim 5, Lu as modified by Zwanzger and Hu discloses the first part of code and the second part of code (see rejection of claim 1) but does not appear to explicitly disclose the further limitations of the claim.
However, Jankly discloses “generating, in… code, an interface capable of communicating between… [a] trusted execution environment and… [an] untrusted execution environment” (Jankly, [0050]: An interface 322 supports communications between the trusted environment 102 and the untrusted environment 104; the examiner notes that the interface is implemented by code ([0071]: various functions described in this patent document are implemented or supported by a computer program that is formed from computer readable program code), therefore the existence of an interface that supports communications between the trusted and untrusted environments implies that code for the interface must have been generated in the program code first). Jankly and the instant application both relate to preserving privacy of data and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to modify the compiling of the converted first part of code and the second part of code disclosed by the combination of Lu, Zwanzger, and Hu to include generating, in the first part of code and the second part of code, an interface capable of communicating between the trusted execution environment and the untrusted execution environment as disclosed by Jankly, for the purpose of supporting the exchange of desired data between the trusted and untrusted environments while preserving the privacy of sensitive data exchanged between parties that do not trust each other (see Jankly, [0020]).

Claim 15 is an electronic device claim corresponding to method claim 5 and is rejected for the same reasons as claim 5 above.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Zwanzger and Hu, and further in view of Ghosn et al. (NPL: “Secured Routines: Language-based Construction of Trusted Execution Environments”) (“Ghosn”).
Regarding claim 6, Lu, Zwanzger, and Hu do not appear to explicitly disclose the further limitations of the claim. However, Ghosn discloses “arranging, based on a programming language… a runtime environment in… [a] trusted execution environment” (Ghosn, 4.3: The third component of GOTEE is the runtime library that is statically linked to the enclave code. It consists of the Go runtime modified to run in an enclave, including its cooperative user-level thread scheduler and garbage collector, and extensions to allow trusted and untrusted code to cooperate; the examiner notes that Go is the programming language the arrangement is based on, “Go runtime” corresponds to “runtime environment,” “enclave” corresponds to “trusted execution environment,” and “modified to run in an enclave” reads on the runtime environment being arranged in a trusted execution environment). Ghosn and the instant application both relate to preserving privacy of data and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the method disclosed by the combination of Lu, Zwanzger, and Hu to include arranging, based on a programming language of the neural network model, a runtime environment in the trusted execution environment, as disclosed by Ghosn, and would have been motivated to do so for the purpose of providing confidentiality and integrity guarantees with only moderate performance overheads, compared to the high complexity and performance overheads of trusted execution environments alone (see Ghosn, Abstract).

Claim 16 is an electronic device claim corresponding to method claim 6 and is rejected for the same reasons as claim 6 above.

Claims 7, 9, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Zwanzger and Hu, and further in view of ITU (NPL: “Technical guidelines for secure multi-party computation”).
Regarding claim 7, Lu as modified by Zwanzger and Hu discloses the trusted execution environment and the neural network model (see rejection of claim 1), but does not appear to explicitly disclose the further limitations of the claim. However, ITU discloses “acquiring configuration information” (ITU 7.1: The configuration information usually includes the IP address, port and other information of multiple parties) “of a multi-party computation frame…” (ITU 7.2: Figure 3 shows a technical framework of an MPC system), “wherein the configuration information comprises at least one of the following: multiple parties participating in [the] computation… an input address where the multi-party computation frame acquires an input from one of the multiple parties; or an output address where the multi-party computation frame returns a computation result to one of the multiple parties” (ITU 7.3: the coordinator generates the configuration information (including specific MPC protocols and IP address) according to the task and sends it to the data providers and computing nodes; the data providers process the input data according to the specified MPC protocols and send it to the designated computing nodes through the secure transmission channel; computing nodes collaboratively carry out the computation on input data according to the MPC functions and specified MPC protocols; computing nodes send the computation result to each result demander at the same time; the examiner notes that the configuration information includes the IP addresses of multiple parties, and the multiple parties comprise data providers that send data to the computing nodes and result demanders that receive the computation result from the computing nodes; therefore, at least one of the addresses is an input address where the multi-party computation frame acquires an input and at least one of the addresses is an output address where the multi-party computation frame returns a computation result).
ITU and the instant application both relate to multi-party computation and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to modify the computation of the neural network model within the trusted execution environment disclosed by the combination of Lu, Zwanzger, and Hu to include acquiring configuration information of a multi-party computation frame for the trusted execution environment, wherein the configuration information comprises at least one of the following: multiple parties participating in computation of the neural network model; a runtime environment of the neural network model; an input address where the multi-party computation frame acquires an input from one of the multiple parties; or an output address where the multi-party computation frame returns a computation result to one of the multiple parties, as disclosed by ITU (the examiner notes that ITU discloses three of the four configuration information options, which reads on “at least one”), and would have been motivated to do so for the purpose of allowing a set of parties to jointly compute their data without any information leakage beyond the computation result, thereby protecting the data of collaborators (see ITU 6.1).

Claim 17 is an electronic device claim corresponding to method claim 7 and is rejected for the same reasons as claim 7 above.
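For illustration only, the configuration information described in the ITU citations above can be represented as a simple data structure. This is the examiner's hypothetical illustration; the field names, party names, and addresses are invented and are not drawn from ITU or the instant application.

```python
# Hypothetical sketch of multi-party computation configuration
# information: parties, protocol, input addresses, and output addresses.
# All names and values are the examiner's illustration only.
from dataclasses import dataclass, field

@dataclass
class MPCConfiguration:
    parties: list                    # multiple parties participating in the computation
    protocol: str                    # the specified MPC protocol
    input_addresses: dict = field(default_factory=dict)   # party -> address where input is acquired
    output_addresses: dict = field(default_factory=dict)  # party -> address where results are returned

# Example configuration a coordinator might generate and send to the
# data providers and computing nodes (cf. ITU 7.3).
config = MPCConfiguration(
    parties=["data_provider_A", "data_provider_B", "result_demander"],
    protocol="secret-sharing",
    input_addresses={"data_provider_A": "10.0.0.1:9000",
                     "data_provider_B": "10.0.0.2:9000"},
    output_addresses={"result_demander": "10.0.0.9:9000"},
)
```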
Regarding claim 9, Lu as modified by Zwanzger and Hu further discloses “acquiring encrypted data… decrypting the encrypted data; inputting the decrypted data into the first part of the neural network model to obtain an intermediate result; and transmitting the intermediate result to the second part so as to train the neural network model” (Lu, [0018]: the training sample data provided by the other data providers besides the first data provider is encrypted training sample data, and the device may further include: a data fusion unit, located in the trusted execution environment, which decrypts the training sample data provided by each of the other data providers, and performs data fusion on the training sample data at the first data provider and the decrypted training sample data from each of the other data providers; and the first model processing unit provides the data-fused training sample data to the current first part of the neural network model in the trusted execution environment to obtain intermediate results, and [0006]: providing the intermediate result to a current second partial neural network model). Lu, Zwanzger, and Hu do not appear to explicitly disclose that the encrypted data is acquired from the input address. However, ITU discloses “acquiring… data from the input address” (ITU 7.1: The configuration information usually includes the IP address, port and other information of multiple parties and ITU 7.3: the coordinator generates the configuration information (including specific MPC protocols and IP address) according to the task and sends it to the data providers and computing nodes; the data providers process the input data according to the specified MPC protocols and send it to the designated computing nodes through the secure transmission channel; the examiner notes that the IP address of the data provider corresponds to the “input address” and the “input data” is acquired by the computing nodes from the input address).
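For illustration only, the claim 9 flow as mapped onto the Lu and ITU citations above (acquire encrypted data from an input address, decrypt it in the trusted environment, run the first part of the model, and pass the intermediate result onward) may be sketched as follows. This is the examiner's hypothetical illustration; the toy XOR cipher and all function names are invented stand-ins, not the methods of any cited reference.

```python
# Hypothetical sketch of the claim 9 data flow. All names are the
# examiner's illustration only; the XOR "cipher" is a toy stand-in
# for a real decryption step performed inside the TEE.

def xor_decrypt(encrypted: bytes, key: int) -> bytes:
    # Stand-in for decrypting the training sample data in the TEE.
    return bytes(b ^ key for b in encrypted)

def first_part(data: bytes) -> int:
    # Stand-in for the first part of the neural network model.
    return sum(data)

def run_in_tee(acquire, key: int) -> int:
    encrypted = acquire()                 # acquire encrypted data from the input address
    decrypted = xor_decrypt(encrypted, key)
    intermediate = first_part(decrypted)  # computed inside the trusted environment
    return intermediate                   # transmitted to the second part for training

# Simulated data provider: the same toy XOR serves as the "encryption."
result = run_in_tee(lambda: xor_decrypt(b"\x01\x02", 0x2A), key=0x2A)
# result is 3, i.e. first_part applied to the recovered bytes b"\x01\x02"
```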
ITU and the instant application both relate to multi-party computation and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the process of acquiring encrypted data disclosed by the combination of Lu, Zwanzger, and Hu to acquire the encrypted data from the input address as disclosed by ITU, for the purpose of ensuring the security and reliability of the data transmission by establishing secure transmission protocols between IP addresses (see ITU 7.4.3).

Claim 19 is an electronic device claim corresponding to method claim 9 and is rejected for the same reasons as claim 9 above.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Zwanzger, Hu, and ITU, and further in view of Gu et al. (US20200082270) (“Gu”).

Regarding claim 8, Lu, Zwanzger, Hu, and ITU do not appear to explicitly disclose the further limitations of the claim. However, Gu discloses “determining whether… [a] trusted execution environment passes verification by each of… multiple parties; and establishing, in response to that the trusted execution environment passes verification by each of the multiple parties, a secure communication channel between each of the multiple parties and the trusted execution environment” (Gu, [0050]: In order to establish the trust between training data contributors 110, 120 and the launched TEE 142, a security module 144 of the TEE 142 performs a remote attestation procedure. The attestation process can prove to the training data contributors 110, 120 that they are communicating with a secure TEE 142 established by a trusted processor and the code running within the TEE 142 is certified...
After the remote attestation, the key provisioning servers run by the different training data contributors 110, 120 can create secure communication channels, e.g., secure transport layer security (TLS) communication channels, directly to the TEE 142 and provision their symmetric keys, which are used by the security module 144 for authenticating and decrypting the training data, to the TEE 142; the examiner notes that the “training data contributors” correspond to “multiple parties,” the “remote attestation procedure” corresponds to “determining whether a trusted execution environment passes verification by each of multiple parties,” and the TLS communication channels between training data contributors and the TEE (trusted execution environment) correspond to “a secure communication channel between each of the multiple parties and the trusted execution environment”). Gu and the instant application both relate to neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the method disclosed by the combination of Lu, Zwanzger, Hu, and ITU to include determining whether the trusted execution environment passes verification by each of the multiple parties; and establishing, in response to that the trusted execution environment passes verification by each of the multiple parties, a secure communication channel between each of the multiple parties and the trusted execution environment, as disclosed by Gu, for the purpose of preserving training data privacy, denying poisoned data from illegitimate data sources, and generating accountable models (see Gu, [0025]).

Claim 18 is an electronic device claim corresponding to method claim 8 and is rejected for the same reasons as claim 8 above.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Zwanzger, Hu, and ITU, and further in view of Lal et al. (US20240062102) (“Lal”).
Regarding claim 10, Lu as modified by Zwanzger, Hu, and ITU discloses “returning, based on the configuration information… [the output] to the output address as the computation result” (ITU 7.1: The configuration information usually includes the IP address, port and other information of multiple parties and ITU 7.3: the coordinator generates the configuration information (including specific MPC protocols and IP address) according to the task and sends it to the data providers and computing nodes… computing nodes collaboratively carry out the computation on input data according to the MPC functions and specified MPC protocols; computing nodes send the computation result to each result demander at the same time; the examiner notes that the IP address of the result demander party corresponds to the output address, and sending the computation result to the result demander corresponds to “returning, based on the configura