DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This action is in response to the arguments filed on 04/25/2025. Claims 1-20 are pending in the application and have been considered below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 10-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 10:
For Step 1, the claim recites a method, which is a statutory category of invention.
For Step 2A, Prong 1:
The claim recites the limitation of “dividing a trained neural network into a first portion comprising a first set of layers and a second portion comprising a second set of layers.” This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the “dividing” step from practically being performed in the human mind. This limitation is a mental process.
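For context, the character of the “dividing” limitation can be expressed as a simple partition of an ordered layer sequence. The following is a minimal sketch with hypothetical layer names, not applicant's implementation:

```python
# Illustrative sketch only (hypothetical layer names; not applicant's
# implementation): the "dividing" step, expressed as partitioning an
# ordered list of a trained network's layers at a chosen split index.

def divide(layers, split_index):
    """Return (first_portion, second_portion) of the layer sequence."""
    return layers[:split_index], layers[split_index:]

layers = ["conv1", "conv2", "fc1", "fc2", "classifier"]
first_portion, second_portion = divide(layers, 3)
# first_portion  -> ["conv1", "conv2", "fc1"]
# second_portion -> ["fc2", "classifier"]
```

The simplicity of the partition underscores why, absent further limitation, such a step can practically be performed mentally.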
For Step 2A, Prong 2, the claim recites additional elements: storing the first portion, memory, first processing system, storing an application, obtain (i.e., data gathering) a first intermediate output, obtain (i.e., data gathering) a respective output, second processing system, operating the first portion, supplying the first intermediate output, executing in the secure element the second portion, and generating, by supplying to the first processing system, a final output of the neural network based on the output information received from the secure element as a function of the respective output, wherein the first processing system uses the final output to control operation of the mobile device.
The additional elements of “memory,” “first processing system,” and “second processing system” are generic computer components that amount to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
The “storing the first portion” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “storing an application” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “obtain (i.e., data gathering) a first intermediate output” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “obtain (i.e., data gathering) a respective output” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “operating the first portion” step is a generic recitation that may amount to mere instructions to apply the abstract idea using a generic computer component under MPEP 2106.05(f).
The “supplying (i.e., transmitting) the first intermediate output” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “generating, by supplying to the first processing system, a final output of the neural network based on the output information received from the secure element as a function of the respective output, wherein the first processing system uses the final output to control operation of the mobile device” step amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
The “executing in the secure element the second portion” step is a generic recitation that may amount to mere instructions to apply the abstract idea using a generic computer component under MPEP 2106.05(f).
Step 2B
The additional elements of “operating the first portion,” “generating, by supplying to the first processing system, a final output of the neural network based on the output information received from the secure element as a function of the respective output, wherein the first processing system uses the final output to control operation of the mobile device” and “executing in the secure element the second portion” do not amount to significantly more for the reasons set forth in step 2A above.
Additionally, under the Subject Matter Eligibility Guidance, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B.
Here, the “obtain (i.e., data gathering) a first intermediate output” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(i): “Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here, the “obtain (i.e., data gathering) a respective output” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(i): “Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here, the “supplying (i.e., transmitting) the first intermediate output” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(i): “Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here, the “storing the first portion” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(iv): storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.
Here, the “storing an application” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(iv): storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of
“operating the first portion,” “generating, by supplying to the first processing system, a final output of the neural network based on the output information received from the secure element as a function of the respective output, wherein the first processing system uses the final output to control operation of the mobile device” and “executing in the secure element the second portion”
to perform the claim steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 11:
Claim 11, which incorporates the rejection of claim 10, recites further limitations such as “feeding the respective output …the application to obtain predictions, which are sent back as intermediate output information to the first processing system to be outputted as final information; taking as the first intermediate output an output of hidden layers of the neural network; or taking as the first intermediate output an output of an output layer, in particular a classifier, stored inside the application” that are part of the abstract idea.
The claim recites an additional element: an inference engine.
The “inference engine” is a generic recitation that may amount to mere instructions to apply the abstract idea using a generic computer component under MPEP 2106.05(f).
The additional element of “an inference engine” does not amount to significantly more for the reasons set forth in step 2A above.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “an inference engine” to perform the claim steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 12:
Claim 12, which incorporates the rejection of claim 10, recites further limitations such as “remotely delivering the model by a secure channel or a confidential channel to the secure element” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 13:
Claim 13, which incorporates the rejection of claim 12, recites further limitations such as “loading the model in the secure element using over-the-air (OTA) remote provisioning” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 14:
Claim 14, which incorporates the rejection of claim 13, recites further limitations such as “the model of the second portion encrypted with a given key specific to the secure element, the secure element being configured to decrypt with the given key the
second portion and to perform the executing the second portion” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
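For context, the arrangement recited in claim 14 (a second-portion model encrypted with a key specific to the secure element, which decrypts it before execution) can be sketched with a toy XOR cipher. This is illustrative only: the key value and function names are hypothetical, and XOR is a stand-in, not a real encryption scheme.

```python
# Illustrative sketch only (toy XOR cipher; hypothetical key and names):
# the second-portion model is delivered encrypted with a key specific to
# the secure element, which decrypts it before executing the second portion.

KEY = 0x5A  # hypothetical key specific to the secure element

def encrypt(model_bytes, key=KEY):
    return bytes(b ^ key for b in model_bytes)

def decrypt(blob, key=KEY):
    return bytes(b ^ key for b in blob)

model = bytes([1, 2, 3, 4])         # stand-in for second-portion parameters
delivered = encrypt(model)          # remotely delivered in encrypted form
recovered = decrypt(delivered)      # secure element recovers the model
# recovered == model
```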
Regarding Claim 15:
Claim 15, which incorporates the rejection of claim 10, recites further limitations such as “a description of cells, connections, and weights and functions associated with the cells and the connections, of the second portion of the neural network” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 16:
Claim 16, which incorporates the rejection of claim 10, recites further limitations such as “supplying the first intermediate output is performed by a proxy application of the first processing system” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 17:
For Step 1, the claim recites a non-transitory computer-readable medium, which is a statutory category of invention.
For Step 2A, Prong 1:
The claim recites the limitation of “divide a trained neural network into a first set of layers and a second portion comprising a second set of layers.” This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the “divide” step from practically being performed in the human mind. This limitation is a mental process.
For Step 2A, Prong 2, the claim recites additional elements: a non-transitory computer-readable media, storing instructions, executing a neural network, memory, storing the first portion, first processing system, storing an application, second processing system, operating the first portion, supply the first intermediate output, execute in the secure element the second portion, and “generate, by supplying to the first processing system, a final output of the neural network based on the output information received from the secure element as a function of the respective output, wherein the first processing system uses the final output to control operation of the mobile device.” The additional elements of “non-transitory computer-readable media,” “neural network,” “first processing system,” and “second processing system” are generic computer components that amount to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
The “storing instructions” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “executing a neural network” step is a generic recitation that may amount to mere instructions to apply the abstract idea using a generic computer component under MPEP 2106.05(f).
The “store the first portion” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “store an application” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “operate the first portion” step is a generic recitation that may amount to mere instructions to apply the abstract idea using a generic computer component under MPEP 2106.05(f).
The “obtain (i.e., data gathering) a first intermediate output” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “obtain (i.e., data gathering) a respective output” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “supply (i.e., transmitting) the first intermediate output” step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “generate, by supplying to the first processing system, a final output of the neural network based on the output information received from the secure element as a function of the respective output, wherein the first processing system uses the final output to control operation of the mobile device” step amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
The “execute in the secure element the second portion” step is a generic recitation that may amount to mere instructions to apply the abstract idea using a generic computer component under MPEP 2106.05(f).
Step 2B
The additional elements of “non-transitory computer-readable media,” “neural network,” “first processing system,” “second processing system,” “executing a neural network,” “memory,” “operate the first portion,” “generate, by supplying to the first processing system, a final output of the neural network based on the output information received from the secure element as a function of the respective output, wherein the first processing system uses the final output to control operation of the mobile device,” and “execute in the secure element the second portion” do not amount to significantly more for the reasons set forth in Step 2A above.
Additionally, under the Subject Matter Eligibility Guidance, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B.
Here, the “obtain (i.e., data gathering) a first intermediate output” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(i): “Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here, the “obtain (i.e., data gathering) a respective output” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(i): “Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here, the “supply (i.e., transmitting) the first intermediate output” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(i): “Receiving or transmitting data over a network, e.g., using the Internet to gather data.”
Here, the “storing instructions” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(iv): storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.
Here, the “store the first portion” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(iv): storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.
Here, the “store an application” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by MPEP 2106.05(d)(II)(iv): storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of
“non-transitory computer-readable media,” “neural network,” “first processing system,” “second processing system,” “generate, by supplying to the first processing system, a final output of the neural network based on the output information received from the secure element as a function of the respective output, wherein the first processing system uses the final output to control operation of the mobile device,” “executing a neural network,” “memory,” “operating the first portion,” and “executing in the secure element the second portion” to perform the claim steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 18:
Claim 18, which incorporates the rejection of claim 17, recites further limitations such as “feeding the respective output …the application to obtain predictions, which are sent back as intermediate output information to the first processing system to be outputted as final information; taking as the first intermediate output an output of hidden layers of the neural network; or taking as the first intermediate output an output of an output layer, in particular a classifier, stored inside the application” that are part of the abstract idea.
The claim recites an additional element: an inference engine.
The “inference engine” is a generic recitation that may amount to mere instructions to apply the abstract idea using a generic computer component under MPEP 2106.05(f).
The additional element of “an inference engine” does not amount to significantly more for the reasons set forth in step 2A above.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “an inference engine” to perform the claim steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 19:
Claim 19, which incorporates the rejection of claim 17, recites further limitations such as “remotely delivering the model by a secure channel or a confidential channel to the secure element” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 20:
Claim 20, which incorporates the rejection of claim 19, recites further limitations such as “loading the model in the secure element using over-the-air (OTA) remote provisioning” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 5, 11, 15, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Loh et al. (US 2021/0192337 A1, hereinafter referred to as Loh) in view of BOS et al. (US 2021/0019661 A1, hereinafter referred to as BOS).
As to claim 1, Loh discloses an apparatus for operating a neural network comprising a set of neural network layers, the apparatus comprising:
a first processing system executing a first portion of the neural network comprising a first subset of the set of neural network layers to obtain a first intermediate output (paragraphs [0090]-[0091] …A portion or sub-volume of the input volume is provided to block 410…The intermediate output of block 411 is provided to concatenate block 722…; wherein using the broadest reasonable interpretation, Examiner interprets “block 410” and “block 411” to teach the limitation); and
a second processing system, external to the first processing system, configured to:
receive as input the first intermediate output of the first portion, in a sequential flow from the first processing system (paragraphs [0013]-[0014]: “An ANN model that has been partitioned based on model parallelism divides the ANN model into a sequence of ANN model partitions and runs one ANN model partition on each hardware component. For example, each ANN model partition may include a number of ANN model layers. Input data is provided to the first ANN model partition in the sequence, and the intermediate results from the first ANN model partition are provided as input to the second ANN model partition, and so on”), and
execute a second portion of the neural network comprising a second subset of the set of neural network layers to obtain a respective output (paragraphs [0094]-[0095] Block 510 includes layers 520 and 530. Additional layers may also be included, as needed. A portion or sub-volume of the input volume is provided to block 510…The intermediate output of block 510 is provided to concatenated block 722…; wherein using the broadest reasonable interpretation, Examiner interprets “block 520” and “block 510” to teach the limitation);
wherein the second processing system is configured to supply, to the first processing system, output information as a function of the respective output (paragraphs [0022]-[0023], at each input node, input data is provided to the activation function for that node, and the output of the activation function is then provided as an input data value to each hidden layer node. At each hidden layer node, the input data value received from each input layer node is multiplied by a respective connection weight, and the resulting products are summed or accumulated into an activation signal value that is provided to the activation function for that node. The output of the activation function is then provided as an input data value to each output layer node. At each output layer node, the output data value received from each hidden layer node is multiplied by a respective connection weight, and the resulting products are summed or accumulated into an activation signal value that is provided to the activation function for that node…; [0034] An activation function is then applied to the results of each convolution calculation to produce an output volume that is provided as an input volume to the subsequent layer. The activation function may be applied by each convolutional layer node or by the nodes of a subsequent locally-connected ReLu layer);
wherein the first processing system is configured to obtain, as a function of the output information, a final output of the neural network (paragraphs [0067]-[0070] …The entire input volume is provided to each HA-specific DNN model, and the output of each HA-specific DNN model is combined or "ensembled" to create the final output…; Fig. 6, element 702; Fig. 7A, element 732 (Concatenated Output, i.e., the final output)).
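The sequential flow Loh describes at [0013]-[0014] can be sketched as follows. The computations are hypothetical stand-ins, not Loh's model; only the dataflow (intermediate result of the first partition feeding the second) mirrors the cited disclosure:

```python
# Illustrative sketch (hypothetical stand-in computations, not Loh's model):
# the first partition's intermediate result is provided as input to the
# second partition, matching the sequential flow of Loh [0013]-[0014].

def run_first_portion(x):
    # stand-in for the first subset of layers on the first processing system
    return [2 * v for v in x]

def run_second_portion(intermediate):
    # stand-in for the second subset of layers on the second processing system
    return sum(intermediate)

x = [1, 2, 3]
intermediate = run_first_portion(x)       # first intermediate output
final = run_second_portion(intermediate)  # respective output -> final output
# final -> 12
```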
However, Loh fails to explicitly teach wherein the second processing system includes a secure element storing a model of the second portion; and
wherein the second processing system is configured to execute the second portion of the neural network by applying the first intermediate output to the model of the second portion stored in the secure element to obtain the respective output.
BOS, in combination with Loh, teaches wherein the second processing system includes a secure element storing a model of the second portion (paragraphs
[0017], dividing the first machine learning model into a first portion and a second portion; inputting a plurality of inputs into the first machine learning model, and in response, a selected one of the first or second portions providing a first plurality of intermediate outputs; inputting the plurality of inputs into a second machine learning model; comparing the first plurality of intermediate outputs of the selected one of the first or second portions of the first machine learning model to a second plurality of intermediate outputs from a corresponding selected portion of the second machine learning model; and determining if the first plurality of intermediate outputs and the corresponding plurality of intermediate outputs match; [0018]-[0021] FIG. 1 illustrates ML model 10 in accordance with an embodiment. Machine learning model 10 is based on a neural network and includes a plurality of nodes organized as layers. In ML model 10, there is one input layer 13 including nodes 12, 14, 16, and 18, three hidden layers 15, 17, and 19 including nodes 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, and 40, and an output layer 21 including nodes 42 and 44…; [0028] For example, processor 76 may execute the machine learning algorithms using training data stored in memory 78…Processor 76 may be implemented in a secure hardware element and may be tamper resistant; [0029] memory 78 may be implemented in a secure hardware element; and [0031] Memories 78 and 82 may store, for example, one or more machine learning models, or encryption, decryption, and verification applications. Memory 82 may be implemented in a secure hardware element and be tamper resistant. Examiner interprets “Processor 76 may be implemented in a secure hardware element and may be tamper resistant” as a second processing system.
According to applicant’s specification, paragraph [0078], a Secure Element is a tamper-resistant platform capable of securely hosting applications and their confidential and cryptographic data.
Thus, “processor 76” represents the secure element, and the “encryption, decryption, and verification applications” represent the applications and their confidential data; coupled with “processor 76 may execute the machine learning algorithms,” this reads on “the second processing system includes a secure element storing a model”); and
wherein the second processing system is configured to execute the second portion of the neural network by applying the first intermediate output to the model of the second portion stored in the secure element to obtain the respective output (paragraphs [0028]-[0029] Processor 76 may be any hardware device capable of executing instructions stored in memory 78 or instruction memory 82. For example, processor 76 may execute the machine learning algorithms using training data stored in memory 78. Processor 76 may have multiple processing cores. Processor 76 may be, for example, a microprocessor, field programmable gate array (FPGA), application-specific integrated circuit (ASIC), or similar device. Processor 76 may be implemented
in a secure hardware element and may be tamper resistant. Examiner interpretation is based on paragraphs [0078]- [0080] of the original disclosure.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Loh to add a secure element, as taught by BOS, above. The modification would have been obvious because one of ordinary skill would be motivated to improve the effectiveness of the ML model, which is influenced by its accuracy, execution time, storage requirements, and the quality of the training data, as suggested by BOS ([0003] and [0027]).
As to claim 2, which incorporates the rejection of claim 1, Loh teaches wherein in the secure element is stored an application comprising the model of the second portion executable by the second processing system (paragraphs [0046]-[0048]…. Computer programs or modules, such as operating system 132, software modules 134, etc., stored within memory 130. For example, software modules 134 may include an ML application, an ANN application, a DNN application, a CNN application, an RNN application, etc.…; wherein using the broadest reasonable interpretation, Examiner interprets “software modules” stored within memory 130 to include an application).
As to claim 5, which incorporates the rejection of claim 1, Loh teaches wherein the model of the second portion includes an output layer, in particular a classifier (paragraphs [0023] …output of the activation function is then provided as an input data value to each output layer node. At each output layer node, the output data value
received from each hidden layer node is multiplied by a respective connection weight, and the resulting products are summed or accumulated into an activation signal value that is provided to the activation function for that node. The output of the activation function is then provided as output data…; [0031]-[0032]… classification layer 50, etc., and output layer 60…; [0045]…classification-based machine learning models, such as, for example, ANNs, DNNs, CNNs, RNNs, SVM, Naive Bayes etc. ).
As to claim 10, Loh discloses a method for executing a neural network comprising a set of layers, the method comprising:
dividing a trained neural network into a first portion comprising a first set of layers and a second portion comprising a second set of layers (paragraphs [0013]-[0015]…ANN model that has been partitioned based on model parallelism divides the ANN model into a sequence of ANN model partitions and runs one ANN model partition on each hardware component. For example, each ANN model partition may include a number of ANN model layers. Input data is provided to the first ANN model partition in the sequence, and the intermediate results from the first ANN model partition are provided as input to the second ANN model partition, and so on; [0065] …layers 320 may be divided among two or more blocks 310);
storing the first portion in a memory accessible by a first processing system, for operation by the first processing system (paragraphs [0046]-[0048]…. computer
programs or modules, such as operating system 132, software modules 134, etc., stored within memory 130. For example, software modules 134 may include an ML application, an ANN application, a DNN application, a CNN application, an RNN application, etc; wherein using the broadest reasonable interpretation, Examiner interprets “software modules” stored within memory 130 to include the limitation);
storing an application comprising a model of the second portion in a secure element associated with a second processing system external to the first processing system (paragraphs [0046]-[0048] Computer programs or modules, such as operating system 132, software modules 134, etc., stored within memory 130. For example, software modules 134 may include an ML application, an ANN application, a DNN application, a CNN application, an RNN application, etc.…; wherein using the broadest reasonable interpretation, Examiner interprets “software modules” stored within memory 130 to include the limitation);
operating the first portion to obtain a first intermediate output (paragraphs [0090]-[0091] …A portion or sub-volume of the input volume is provided to block 410…The intermediate output of block 411 is provided to concatenate block 722…; wherein using the broadest reasonable interpretation, Examiner interprets “block 410” and “block 411” to teach the limitation); and
a second processing system, external to the first processing system, configured to receive as input the first intermediate output of the first portion, and configured to execute a second portion of the neural network comprising a second subset of the set of neural network layers, obtaining a respective output (paragraphs [0094]- [0095] …Block 510 includes layers 520 and 530. Additional layers may also be included, as needed. A portion or sub-volume of the input volume is provided to block 510…. The intermediate output of block 510 is provided to concatenated block 722…; wherein using the broadest reasonable interpretation, Examiner interprets “block 520” and “block 510” to teach the limitation);
supplying the first intermediate output in a sequential flow from the first processing system as a direct intermediate input to the application comprising the model of the second portion in the secure element (paragraphs [0022]- [0023], at each input node, input data is provided to the activation function for that node, and the output of the activation function is then provided as an input data value to each hidden layer node. At each hidden layer node, the input data value received from each input layer node is multiplied by a respective connection weight, and the resulting products are summed or accumulated into an activation signal value that is provided to the activation function for that node. The output of the activation function is then provided as an input data value to each output layer node. At each output layer node, the output data value received from each hidden layer node is multiplied by a respective connection weight, and the resulting products are summed or accumulated into an activation signal value that is provided to the activation function for that node…; [0034] An activation function is then applied to the results of each convolution calculation to produce an output volume that is provided as an input volume to the subsequent layer. The activation function may be applied by each convolutional layer node or by the nodes of a subsequent locally-connected ReLu layer).
Loh fails to explicitly teach:
storing an application comprising a model of the second portion in a secure element associated with a second processing system external to the first processing system;
executing in the secure element the second portion to obtain a respective output;
and
generating, by the first processing system, a final output of the neural network based on the output information received from the secure element as a function of the respective output, wherein the first processing system uses the final output to control operation of the mobile device.
BOS, in combination with Loh, teaches storing an application comprising a model of the second portion in a secure element associated with a second processing system external to the first processing system (paragraphs [0015] The first and second machine learning models may be neural networks. The selected portions of the first and second plurality of portions may each include one or more layers of the plurality of layers. The selected portions of the first and second plurality of portions may each include one or more nodes of one or more layers of the plurality of layers. Examiner interprets the second machine learning model as a second processing system; [0017], dividing the first machine learning model into a first portion and a second portion; inputting a plurality of inputs into the first machine learning model, and in response, a selected one of the first or second portions providing a first plurality of intermediate outputs; inputting the plurality of inputs into a second machine learning model; comparing the first plurality of intermediate outputs of the selected one of the first or second portions of the first machine learning model to a second plurality of intermediate outputs from a corresponding selected portion of the second machine learning model; and determining if the first plurality of intermediate outputs and the corresponding plurality of intermediate outputs match; [0018]-[0021] FIG. 1 illustrates ML model 10 in accordance with an embodiment. Machine learning model 10 is based on a neural network and includes a plurality of nodes organized as layers. 
In ML model 10, there is one input layer 13 including nodes 12, 14, 16, and 18, three hidden layers 15, 17, and 19 including nodes 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, and 40, and an output layer 21 including nodes 42 and 44…; [0028] For example, processor 76 may execute the machine learning algorithms using training data stored in memory 78…Processor 76 may be implemented in a secure hardware element and may be tamper resistant; [0029] memory 78 may be implemented in a secure hardware element; and [0031] Memories 78 and 82 may store, for example, one or more machine learning models, or encryption, decryption, and verification applications. Memory 82 may be implemented in a secure hardware element and be tamper resistant. Examiner interpretation is based on paragraphs [0078]- [0080] of the original disclosure); executing in the secure element the second portion to obtain a respective output (paragraphs [0028]-[0029] Processor 76 may be any hardware device capable of executing instructions stored in memory 78 or instruction memory 82. For example, processor 76 may execute the machine learning algorithms using training data stored in memory 78. Processor 76 may have multiple processing cores. Processor 76 may be, for example, a microprocessor, field programmable gate array (FPGA), application-specific integrated circuit (ASIC), or similar device. Processor 76 may be implemented in a secure hardware element and may be tamper resistant. Examiner interpretation is based on paragraphs [0078]- [0080] of the original disclosure); and
generating, by the first processing system, a final output of the neural network based on the output information received from the secure element as a function of the respective output, wherein the first processing system uses the final output to control operation of the mobile device (paragraphs [0028]-[0029] Processor 76 may be any hardware device capable of executing instructions stored in memory 78 or instruction memory 82. For example, processor 76 may execute the machine learning algorithms using training data stored in memory 78. Processor 76 may have multiple processing cores. Processor 76 may be, for example, a microprocessor, field programmable gate array (FPGA), application-specific integrated circuit (ASIC), or similar device. Processor 76 may be implemented in a secure hardware element and may be tamper resistant. Examiner interpretation is based on paragraphs [0078]- [0080] of the original disclosure.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Loh to add a secure element, as taught by BOS, above. The modification would have been obvious because one of ordinary skill would be motivated to improve the effectiveness of the ML model, which is influenced by its accuracy, execution time, storage requirements, and the quality of the training data, as suggested by BOS ([0003] and [0027]).
As to claim 11, which incorporates the rejection of claim 10, Loh discloses wherein the supplying to the first processing system the output information as the function of the respective output includes one of the following:
feeding the respective output to an inference engine of the application to obtain predictions, which are sent back as intermediate output information to the first processing system to be outputted as final information;
taking as the first intermediate output an output of hidden layers of the neural network (paragraphs [0094]-[0095] …Block 510 includes layers 520 and 530. Additional layers may also be included, as needed. …A portion or sub-volume of the input volume is provided to block 510…. The intermediate output of block 510 is provided to concatenated block 722…; wherein using the broadest reasonable interpretation, Examiner interprets “block 520” and “block 510” to teach the limitation); or taking as the first intermediate output an output of an output layer, in particular a classifier, stored inside the application.
As to claim 15, which incorporates the rejection of claim 10, Loh discloses wherein the model comprises a description of cells, connections, and weights and functions associated with the cells and the connections, of the second portion of the neural network (paragraphs [0022]-[0023] In a fully-connected, feedforward ANN, each node is connected to all of the nodes in the preceding layer, as well as to all of the nodes in the subsequent layer. For example, each input layer node is connected to each hidden layer node, each hidden layer node is connected to each input layer node and each output layer node, and each output layer node is connected to each hidden layer node. Additional hidden layers are similarly interconnected. Each connection has a weight value, and each node has an activation function, such as, for example, a linear function, a step function, a sigmoid function, a tanh function, a rectified linear unit (ReLu) function, etc., that determines the output of the node based on the weighted sum of the inputs to the node. The input data propagates from the input layer nodes, through respective connection weights to the hidden layer nodes, and then through respective connection weights to the output layer nodes.).
As to claim 17, Loh discloses instructions for executing a neural network comprising a set of layers that, when executed by a processor, cause the processor to:
divide a trained neural network into a first portion comprising a first set of layers and a second portion comprising a second set of layers (paragraphs [0013]-[0015]…ANN model that has been partitioned based on model parallelism divides the ANN model into a sequence of ANN model partitions and runs one ANN model partition on each hardware component. For example, each ANN model partition may include a number of ANN model layers. Input data is provided to the first ANN model partition in the sequence, and the intermediate results from the first ANN model partition are provided as input to the second ANN model partition, and so on; [0065] …layers 320 may be divided among two or more blocks 310);
store the first portion in a memory accessible by a first processing system, for operation by the first processing system (paragraphs [0046]-[0048]…. computer
programs or modules, such as operating system 132, software modules 134, etc., stored within memory 130. For example, software modules 134 may include an ML application, an ANN application, a DNN application, a CNN application, an RNN application, etc.);
store an application comprising a model of the second portion in a secure element associated with a second processing system external to the first processing system (paragraphs [0046]-[0048] Computer programs or modules, such as operating system 132, software modules 134, etc., stored within memory 130. For example, software modules 134 may include an ML application, an ANN application, a DNN application, a CNN application, an RNN application, etc.…; wherein using the broadest reasonable interpretation, Examiner interprets “software modules” stored within memory 130 to include the limitation);
operate the first portion to obtain a first intermediate output (paragraphs [0090]- [0091] …. A portion or sub-volume of the input volume is provided to block 410…The intermediate output of block 411 is provided to concatenate block 722); and
a second processing system, external to the first processing system, configured to receive as input the first intermediate output of the first portion, and configured to execute a second portion of the neural network comprising a second subset of the set of neural network layers, obtaining a respective output (paragraphs [0094]- [0095] …Block 510 includes layers 520 and 530. Additional layers may also be included, as needed. A portion or sub-volume of the input volume is provided to block 510…. The intermediate output of block 510 is provided to concatenated block 722…);
supplying the first intermediate output in a sequential flow from the first processing system as a direct intermediate input to the application comprising the model of the second portion in the secure element (paragraphs [0022]- [0023], at each input node, input data is provided to the activation function for that node, and the output of the activation function is then provided as an input data value to each hidden layer node. At each hidden layer node, the input data value received from each input layer node is multiplied by a respective connection weight, and the resulting products are summed or accumulated into an activation signal value that is provided to the activation function for that node. The output of the activation function is then provided as an input data value to each output layer node. At each output layer node, the output data value received from each hidden layer node is multiplied by a respective connection weight, and the resulting products are summed or accumulated into an activation signal value that is provided to the activation function for that node…; [0034] An activation function is then applied to the results of each convolution calculation to produce an output volume that is provided as an input volume to the subsequent layer. The activation function may be applied by each convolutional layer node or by the nodes of a subsequent locally-connected ReLu layer).
Loh fails to explicitly teach:
store an application comprising a model of the second portion in a secure element associated with a second processing system external to the first processing system;
execute in the secure element the second portion to obtain a respective output;
and
generate, by the first processing system, a final output of the neural network based on the output information received from the secure element as a function of the respective output, wherein the first processing system uses the final output to control operation of the mobile device.
BOS, in combination with Loh, teaches storing an application comprising a model of the second portion in a secure element associated with a second processing system external to the first processing system (paragraphs [0015] The first and second machine learning models may be neural networks. The selected portions of the first and second plurality of portions may each include one or more layers of the plurality of layers. The selected portions of the first and second plurality of portions may each include one or more nodes of one or more layers of the plurality of layers. Examiner interprets the second machine learning model as a second processing system; [0017], dividing the first machine learning model into a first portion and a second portion; inputting a plurality of inputs into the first machine learning model, and in response, a selected one of the first or second portions providing a first plurality of intermediate outputs; inputting the plurality of inputs into a second machine learning model; comparing the first plurality of intermediate outputs of the selected one of the first or second portions of the first machine learning model to a second plurality of intermediate outputs from a corresponding selected portion of the second machine learning model; and determining if the first plurality of intermediate outputs and the corresponding plurality of intermediate outputs match; [0018]-[0021] FIG. 1 illustrates ML model 10 in accordance with an embodiment. Machine learning model 10 is based on a neural network and includes a plurality of nodes organized as layers. 
In ML model 10, there is one input layer 13 including nodes 12, 14, 16, and 18, three hidden layers 15, 17, and 19 including nodes 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, and 40, and an output layer 21 including nodes 42 and 44…; [0028] For example, processor 76 may execute the machine learning algorithms using training data stored in memory 78…Processor 76 may be implemented in a secure hardware element and may be tamper resistant; [0029] memory 78 may be implemented in a secure hardware element; and [0031] Memories 78 and 82 may store, for example, one or more machine learning models, or encryption, decryption, and verification applications. Memory 82 may be implemented in a secure hardware element and be tamper resistant. Examiner interpretation is based on paragraphs [0078]- [0080] of the original disclosure);
execute in the secure element the second portion to obtain a respective output (paragraphs [0028]-[0029] Processor 76 may be any hardware device capable of executing instructions stored in memory 78 or instruction memory 82. For example, processor 76 may execute the machine learning algorithms using training data stored in memory 78. Processor 76 may have multiple processing cores. Processor 76 may be, for example, a microprocessor, field programmable gate array (FPGA), application-specific integrated circuit (ASIC), or similar device. Processor 76 may be implemented in a secure hardware element and may be tamper resistant. Examiner interpretation is based on paragraphs [0078]- [0080] of the original disclosure); and
generate, by the first processing system, a final output of the neural network based on the output information received from the secure element as a function of the respective output, wherein the first processing system uses the final output to control operation of the mobile device (paragraphs [0028]-[0029] Processor 76 may be any hardware device capable of executing instructions stored in memory 78 or instruction memory 82. For example, processor 76 may execute the machine learning algorithms using training data stored in memory 78. Processor 76 may have multiple processing cores. Processor 76 may be, for example, a microprocessor, field programmable gate array (FPGA), application-specific integrated circuit (ASIC), or similar device. Processor 76 may be implemented in a secure hardware element and may be tamper resistant. Examiner interpretation is based on paragraphs [0078]- [0080] of the original disclosure.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Loh to add a secure element, as taught by BOS, above. The modification would have been obvious because one of ordinary skill would be motivated to improve the effectiveness of the ML model, which is influenced by its accuracy, execution time, storage requirements, and the quality of the training data, as suggested by BOS ([0003] and [0027]).
As to claim 18, which incorporates the rejection of claim 17, Loh discloses wherein the supplying to the first processing system the output information as the function of the respective output includes one of the following:
feeding the respective output to an inference engine of the application to obtain predictions, which are sent back as intermediate output information to the first processing system to be outputted as final information;
taking as the first intermediate output an output of hidden layers of the neural network (paragraphs [0094]-[0095] …Block 510 includes layers 520 and 530. Additional layers may also be included, as needed. …A portion or sub-volume of the input volume is provided to block 510…. The intermediate output of block 510 is provided to concatenated block 722…; wherein using the broadest reasonable interpretation, Examiner interprets “block 520” and “block 510” to teach the limitation); or taking as the first intermediate output an output of an output layer, in particular a classifier, stored inside the application.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Loh et al. (US 2021/0192337 A1, hereinafter referred to as Loh), in view of BOS et al. (US 2021/0019661 A1, hereinafter referred to as BOS), and further in view of Chen et al. (US 2019/0073586 A1, hereinafter referred to as Chen).
As to claim 3, which incorporates the rejection of claim 2, Loh and BOS fail to explicitly teach wherein the application includes a command to feed the first intermediate output to the model of the second portion.
However, Chen, in combination with Loh and BOS, teaches a command to feed the first intermediate output to the model of the second portion (paragraphs [0005] …the computing device may define a first intermediate output of the first module in the
series of multiple (NN) modules, and feed forward this first intermediate output to the first input of the second (e.g., a subsequent or a next in sequence) module in the series of multiple modules, where the same process may be applied to the second module to define a second intermediate output of the second module.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Loh and BOS to add a “feed command,” as taught by Chen, above. The modification would have been obvious because one of ordinary skill would be motivated to process large amounts of data to reduce the data size and place the data in a format more suitable for further processing within the NN ML model, as suggested by Chen ([0005]).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Loh et al. (US 2021/0192337 A1, hereinafter referred to as Loh), in view of BOS et al. (US 2021/0019661 A1, hereinafter referred to as BOS), and further in view of Se et al. (US 2021/0227126 A1, hereinafter referred to as Se).
As to claim 4, which incorporates the rejection of claim 2, Loh and BOS fail to explicitly teach wherein the application includes an inference engine receiving the respective output and outputting predictions.
Se, in combination with Loh and BOS, teaches wherein the application includes an inference engine receiving the respective output and outputting predictions (paragraph [0033] ... training of an inference engine…training of an inference engine…Captured images 160 (i.e., respective output) are fed into the trained inference network 150, which outputs classification predictions and confidence for each image. The imaging device may then use the image classification prediction to determine further image processing actions.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Loh and BOS to add an “inference engine,” as taught by Se, above. The modification would have been obvious because one of ordinary skill would be motivated to use a neural network optimized for image classification and segmentation on a mobile device, as suggested by Se ([0033]).
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over
Loh et al. (US 2021/0192337 A1, hereinafter referred to as Loh), in view of BOS et al. (US 2021/0019661 A1, hereinafter referred to as BOS), and further in view of Hilton (US 2009/0089794 A1, hereinafter referred to as Hilton).
As to claim 6, which incorporates the rejection of claim 1, Loh and BOS fail to explicitly teach wherein the first processing system includes a further proxy application which is configured to operate as an interface to the second processing system and the secure element, obtaining the first intermediate output and supplying it to the second processing system and the secure element, and receiving the output information as the function of the respective output from the second processing system.
Hilton, in combination with Loh and BOS, teaches wherein the first processing system include a further proxy application which is configured to operate as an interface to the second processing system and the secure element, obtaining the first intermediate output and supplying it to the second processing system and the secure element, and receiving the output information as the function of the respective output from the second processing system (paragraph [0053]-[0056]…first data processing system 300-1 extends its services to the second data processing system 300-2 via the proxy task 308…. proxy task 308 has the primary responsibility for external communication with other related second tasks 304 on the second data processing system 300-2…first network interface 410 and second network interface 412, first I/O device interface 414 and second I/O device interface 416…).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Loh and BOS to add a “proxy application,” as taught by Hilton, above. The modification would have been obvious because one of ordinary skill would be motivated to have proxy task 308 and the related second tasks 304 communicate with each other via communication mechanism 322 in a manner specific to the needs of the application, as suggested by Hilton ([0053]).
As to claim 16, which incorporates the rejection of claim 10, Loh and BOS fail to explicitly teach wherein the supplying the first intermediate output is performed by a proxy application of the first processing system.
Hilton, in combination with Loh and BOS, teaches wherein the supplying the first intermediate output is performed by a proxy application of the first processing system (paragraph [0053]-[0056]…first data processing system 300-1 extends its services to the second data processing system 300-2 via the proxy task 308…. proxy task 308 has the primary responsibility for external communication with other related second tasks 304 on the second data processing system 300-2…first network interface 410 and second network interface 412, first I/O device interface 414 and second I/O device interface 416…).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Loh and BOS to add a “proxy application,” as taught by Hilton, above. The modification would have been obvious because one of ordinary skill would be motivated to have the proxy tasks 308 communicate with each other via communication mechanism 322 in a manner specific to the needs of the application, as suggested by Hilton ([0053]).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Loh et al. (US 2021/0192337 A1, hereinafter referred to as Loh), in view of BOS et al. (US 2021/0019661 A1, hereinafter referred to as BOS), and further in view of Twitchell et al. (US 11,108,702 B1, hereinafter referred to as Twitchell).
As to claim 7, which incorporates the rejection of claim 2, Loh and BOS fail to explicitly teach wherein the application comprising the model of the second portion includes a velocity mechanism which limits a number of executions performable by the application to a given limit number of executions, in particular includes a counter set to the given limit number of executions, the application comprising the model of the second portion being configured to stop when the counter reaches the given limit number of executions.
Twitchell, in combination with Loh and BOS, teaches wherein the application comprising the model of the second portion includes a velocity mechanism which limits a number of executions performable by the application to a given limit number of executions, in particular includes a counter set to the given limit number of executions, the application comprising the model of the second portion being configured to stop when the counter reaches the given limit number of executions (col. 2, lines 10-17…the command document specifies one or more limitations on execution of the commands. In an embodiment, the limitations include a velocity parameter that limits the number of computer system instances to which the configuration may be applied concurrently. In an embodiment, the limitations include an error threshold that stops the application of the configuration if the number of configuration failures meets or exceeds the error threshold…; col. 5, lines 23-35…In an embodiment, the set of parameters include a velocity parameter that limits the number of virtual computer system instances to which a set of commands can be concurrently provided for execution…).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Loh and BOS to add a “velocity mechanism,” as taught by Twitchell, above. The modification would have been obvious because one of ordinary skill would be motivated to limit the number of computer system instances to which the configuration may be applied concurrently, as suggested by Twitchell (col. 2, lines 12-14).
Claims 8, 9, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Loh et al. (US 2021/0192337 A1, hereinafter referred to as Loh), in view of BOS et al. (US 2021/0019661 A1, hereinafter referred to as BOS), and further in view of Park et al. (US 2021/0289360 A1, hereinafter referred to as Park).
As to claim 8, which incorporates the rejection of claim 1, Loh and BOS fail to explicitly teach wherein the secure element is one of:
a Universal Integrated Circuit Card (UICC);
an embedded UICC (eUICC);
an embedded Secure Element (eSE); or
a removable memory card.
Park, in combination with Loh and BOS, teaches wherein the secure element is one of:
a Universal Integrated Circuit Card (UICC) (paragraphs [0057] and [0062]);
an embedded UICC (eUICC) (paragraphs [0041] and [0062]);
an embedded Secure Element (eSE) (paragraph [0039]); or
a removable memory card.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Loh and BOS to add a Universal Integrated Circuit Card (UICC), as taught by Park, above. The modification would have been obvious because one of ordinary skill would be motivated to use secure mobile communication, as suggested by Park ([0060]).
As to claim 9, which incorporates the rejection of claim 1, Loh and BOS fail to explicitly teach wherein the first processing system is a processor of a mobile device and the second processing system comprising the secure element is an integrated card in the mobile device.
Park, in combination with Loh and BOS, teaches wherein the first processing system is a processor of a mobile device and the second processing system comprising the secure element is an integrated card in the mobile device (paragraph [0029], wherein, using the broadest reasonable interpretation, Examiner interprets smartphones, mobile phones, and mobile medical devices to include a processor; paragraphs [0057] and [0062], Universal Integrated Circuit Card (UICC)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Loh and BOS to add an integrated card, as taught by Park, above. The modification would have been obvious because one of ordinary skill would be motivated to use secure mobile communication, as suggested by Park ([0060]).
As to claim 12, which incorporates the rejection of claim 10, Loh and BOS fail to explicitly teach wherein storing the application comprising the model of the second portion in the secure element includes remotely delivering the model by a secure channel or a confidential channel to the secure element.
Park, in combination with Loh and BOS, teaches wherein storing the application comprising the model of the second portion in the secure element includes remotely delivering the model by a secure channel or a confidential channel to the secure element (paragraph [0086]…A secure channel may be established between the SM-DP+ 210 and the eUICC 130. As an example, the secure channel may be used during a period when a profile is downloaded and installed. Furthermore, the secure channel may be used in connection with transmitting a profile between the SM-DP+ 210 and the terminal 100. The terminal 100 may deliver a profile package to the eUICC 130.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Loh and BOS to add a secure channel, as taught by Park, above. The modification would have been obvious because one of ordinary skill would be motivated to use secure mobile communication, as suggested by Park ([0060]).
As to claim 19, which incorporates the rejection of claim 10, Loh and BOS fail to explicitly teach wherein storing the application comprising the model of the second portion in the secure element includes remotely delivering the model by a secure channel or a confidential channel to the secure element.
Park, in combination with Loh and BOS, teaches wherein storing the application comprising the model of the second portion in the secure element includes remotely delivering the model by a secure channel or a confidential channel to the secure element (paragraph [0086]…A secure channel may be established between the SM-DP+ 210 and the eUICC 130. As an example, the secure channel may be used during a period when a profile is downloaded and installed. Furthermore, the secure channel may be used in connection with transmitting a profile between the SM-DP+ 210 and the terminal 100. The terminal 100 may deliver a profile package to the eUICC 130.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Loh and BOS to add a secure channel, as taught by Park, above. The modification would have been obvious because one of ordinary skill would be motivated to use secure mobile communication, as suggested by Park ([0060]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Loh et al. (US 2021/0192337 A1, hereinafter referred to as Loh), in view of BOS et al. (US 2021/0019661 A1, hereinafter referred to as BOS), and further in view of Kumar et al. (US 2010/0088188 A1, hereinafter referred to as Kumar).
As to claim 13, which incorporates the rejection of claim 12, Loh and BOS fail to explicitly teach wherein storing the application comprising the model of the second portion in the secure element includes loading the model in the secure element using over-the-air (OTA) remote provisioning.
Kumar, in combination with Loh and BOS, teaches wherein storing the application comprising the model of the second portion in the secure element includes loading the model in the secure element using over-the-air (OTA) remote provisioning (paragraphs [0005]-[0006], over-the-air (OTA) virtual card transfer between NFC-enabled mobile devices… OTA provisioning server).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Loh and BOS to add over-the-air (OTA) remote provisioning, as taught by Kumar, above. The modification would have been obvious because one of ordinary skill would be motivated to provide virtual card transfer between near field communications (NFC)-enabled mobile devices, as suggested by Kumar ([0006]).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Loh et al. (US 2021/0192337 A1, hereinafter referred to as Loh), in view of BOS et al. (US 2021/0019661 A1, hereinafter referred to as BOS), and further in view of Kumar et al. (US 2010/0088188 A1, hereinafter referred to as Kumar), and Dean et al. (US 2020/0314644 A1, hereinafter referred to as Dean).
As to claim 14, which incorporates the rejection of claim 13, Kumar, in combination with Loh and BOS, teaches wherein an OTA server loads in the secure element the application comprising the model of the second portion encrypted with a given key specific to the secure element (paragraph [0010]…secure element may include any type of hardware or combination of hardware and software that utilizes encryption or similar means for securing designated data within a mobile device…; [0145]…use encryption key in a portable communication device to be updated via an access device such as a POS terminal, the portable communication device need not be in long range over-the-air communication with a remote provisioning server computer).
However, Loh, BOS and Kumar fail to explicitly teach the secure element being configured to decrypt with the given key the second portion and to perform the executing the second portion.
Dean, in combination with Loh, BOS and Kumar, teaches the secure element being configured to decrypt with the given key the second portion and to perform the executing the second portion (paragraph [0043]…keys may include encryption and decryption keys. Keys may also be symmetric or asymmetric. A cryptographic algorithm can be an encryption algorithm that transforms original data into an alternate representation, or a decryption algorithm that transforms encrypted information back to the original data…).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Loh, BOS and Kumar to add encryption, as taught by Dean, above. The modification would have been obvious because one of ordinary skill would be motivated to use an encryption key in a portable communication device to be updated via an access device, as suggested by Dean ([0145]).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Loh et al. (US 2021/0192337 A1, hereinafter referred to as Loh), in view of BOS et al. (US 2021/0019661 A1, hereinafter referred to as BOS), and further in view of Park et al. (US 2021/0289360 A1, hereinafter referred to as Park), and Kumar et al. (US 2010/0088188 A1, hereinafter referred to as Kumar).
As to claim 20, which incorporates the rejection of claim 19, Loh, BOS and Park fail to explicitly teach wherein storing the application comprising the model of the second portion in the secure element includes loading the model in the secure element using over-the-air (OTA) remote provisioning.
Kumar, in combination with Loh, BOS and Park, teaches wherein storing the application comprising the model of the second portion in the secure element includes loading the model in the secure element using over-the-air (OTA) remote provisioning (paragraphs [0005]-[0006]…over-the-air (OTA) virtual card transfer between NFC-enabled mobile devices…OTA provisioning server).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Loh, BOS and Park to add over-the-air (OTA) remote provisioning, as taught by Kumar, above. The modification would have been obvious because one of ordinary skill would be motivated to provide virtual card transfer between near field communications (NFC)-enabled mobile devices, as suggested by Kumar ([0006]).
Response to Applicant’s Arguments
Applicant’s arguments filed on 04/25/2025 with respect to the rejections of claims 1-20 have been considered but are moot in view of the new ground(s) of rejection for the 103 rejections.
Prior Art Rejections
Applicant’s arguments are moot in view of new ground(s) of rejection: BOS et al. (US 2021/0019661 A1).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABABACAR SECK, whose telephone number is (571) 270-7146. The examiner can normally be reached Monday-Friday, 8:00 A.M.-6:00 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached at 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABABACAR SECK/Examiner, Art Unit 2122
/VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147