Prosecution Insights
Last updated: April 19, 2026
Application No. 18/479,894

SYSTEM AND METHOD FOR ENCRYPTING MACHINE LEARNING MODELS

Status: Final Rejection (§103)
Filed: Oct 03, 2023
Examiner: CARNES, THOMAS A
Art Unit: 2436
Tech Center: 2400 (Computer Networks)
Assignee: Technology Innovation Institute - Sole Proprietorship LLC
OA Round: 2 (Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67%, above average (47 granted / 70 resolved; +9.1% vs TC avg)
Interview Lift: +73.2% among resolved cases with an interview, a strong effect
Typical Timeline: 3y 2m average prosecution; 25 applications currently pending
Career History: 95 total applications across all art units

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 9.2% (-30.8% vs TC avg)
§112: 24.7% (-15.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 70 resolved cases.
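The headline figures above follow directly from the displayed counts. A minimal sanity-check of the arithmetic, assuming (as the footnote states) that each delta is measured against the Tech Center average; the exact definition of the statute-specific rates is not stated on this page, so the "implied TC average" values are derived from the deltas, not independently sourced:

```python
# Reproduces the displayed examiner statistics from the raw counts shown
# on this page. The TC-average values are implied by the reported deltas.

granted, resolved = 47, 70
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")  # -> 67.1%, shown as 67%

delta_vs_tc = 0.091  # "+9.1% vs TC avg"
print(f"implied TC average allow rate: {allow_rate - delta_vs_tc:.1%}")  # ~58.0%

# Statute-specific rates and the Tech Center averages they imply.
for statute, rate, delta in [("101", 0.082, -0.318), ("103", 0.540, 0.140),
                             ("102", 0.092, -0.308), ("112", 0.247, -0.153)]:
    print(f"S{statute}: examiner {rate:.1%}, implied TC avg {rate - delta:.1%}")
```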

Office Action

§103
DETAILED ACTION

This Office Action is in response to the communication filed on 12/15/2025. Claims 1-20 are pending. Claims 1, 8 and 15 have been amended. Claims 1-20 are rejected.

The Examiner cites particular sections in the references as applied to the claims below for the convenience of the applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant(s) fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant's remaining arguments filed 12/15/2025 have been fully considered, but they are not persuasive.

The amendments are directed to executing the model using general hardware circuitry. Using hardware circuitry to execute encrypted comparisons does not meaningfully limit the claims. The amended claim language "executing (a model) as hardware circuitry" and "implementing the operations using hardware circuitry" does not limit the scope of the claims to exclude software being executed by hardware. The broad scope of the claims reasonably includes software instructions which are ultimately executed by hardware. The remaining amendments reword and clarify the claims but do not change their scope. Therefore, the amendments, as written, do not add significantly more and do not overcome the current rejection; they required only minor changes to the cited paragraphs.

Is the inventive concept non-programmable hardware? Hardware which performs functions which software cannot perform? Is the decision tree model a physical structure? If so, what are the structural elements? The claims are drafted broadly enough to encompass conventional hardware executing software.

Note: Please consider Moon (US 8165386), published 4/24/2012, regarding a "decision tree hardware architecture." Moon appears to teach a similar method and inventive concept with similar benefits as recited in applicant's disclosure. At [Col. 4, lines 40-50] Moon discloses: "It is an objective of the present invention to efficiently handle complex computational tasks, including human behavior and demographic analysis, from the video source, utilizing the novel and flexible hardware architecture in a preferred embodiment." This maps to Applicant's argument on page 7 of the remarks: "operations to be performed directly in hardware at the internal nodes of the decision tree, thereby providing computationally efficient confidential inference that is accessible to edge devices."

Dependent claims are also rejected for inheriting the deficiencies of the independent claims set forth above.
Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office Action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6, 8-13 and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Vald (U.S. 2020/0382273) in view of Yoshino (U.S. 2019/0318104).

Regarding claims 1, 8 and 15, Vald discloses: A method, comprising: initiating, by a server system comprising a host device, an [encrypted decision tree model] as hardware circuitry on an accelerator coupled with the host device (Vald [0003, 0017-0020, 0024, 0032, 0065-0067] teaches the user (client) device; [0003, 0022-0032, 0044-0051] teaches obtaining encrypted data from a client device, which is encrypted by using FHE, and processing it using a machine learning model (which can be a tree-based model) which performs operations on the encrypted data to make an encrypted prediction, which is then sent back to the client device; [0064-0065] teaches that the method can be implemented using various accelerators); receiving, by the host device, an input, from the user device, to be evaluated using the [encrypted decision tree model] (Vald [0003, 0017, 0022-0032, 0044-0051, 0065-0067] teaches the machine learning device obtains encrypted data, wherein the encrypted data is encrypted using fully homomorphic encryption, which is the agreed upon encryption schema); evaluating, by the hardware circuitry of the accelerator implementing the [encrypted decision tree model], [...] by performing the encrypted comparison operations at the internal nodes (Vald [Abstract, 0003, 0017, 0022-0032, 0044-0051, 0065-0067] teaches that the machine learning device performs at least one computation on the encrypted data while the encrypted data remains encrypted); generating, by the hardware circuitry of the accelerator implementing the [encrypted decision tree model], [...] (Vald [0003, 0017, 0022-0032, 0044-0051, 0065-0067] teaches "One example method generally includes obtaining, at a computing device, encrypted data, wherein the encrypted data is encrypted using fully homomorphic encryption and performing at least one computation on the encrypted data while the encrypted data remains encrypted"; [0064-0065] teaches using an FPGA accelerator).

Vald does not explicitly disclose: wherein the hardware circuitry implements encrypted comparison operations at internal nodes of the encrypted decision tree model, the encrypted comparison operations comparing encrypted threshold values with encrypted input values; encrypted decision tree.

However, in the same field of endeavor, Yoshino teaches: wherein the hardware circuitry implements encrypted comparison operations at internal nodes of the encrypted decision tree model, the encrypted comparison operations comparing encrypted threshold values with encrypted input values; encrypted decision tree (Yoshino [0035-0038] teaches comparable encryption (in the embodiments, the encrypted data is represented by E( )); [0039-0042, 0053-0059, 0065, 0078-0089, 0105-0111] teaches an encrypted determination module which can perform learning (generate a decision tree) while keeping the learning and analysis secret (operations are performed at nodes of the tree, and the tree along with the data remains encrypted); [Fig. 12A, Fig. 16A] shows examples of encrypted decision trees).

Vald and Yoshino are analogous art because they are from the same field of endeavor, secure tree-based learning. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Vald and Yoshino before him or her, to modify the method of Vald to include the encrypted comparisons using encrypted decision trees of Yoshino, because it will improve data secrecy at the learning and analysis phases of decision tree analysis. The motivation for doing so would be: "there is a possibility that data for learning contains sensitive information, which requires secrecy of the data to be improved even in a learning phase of a decision tree analysis. Therefore, one embodiment of this invention has an object to improve secrecy of data in a learning phase of a decision tree analysis." (Yoshino, paragraphs 0005-0006, 0070). Therefore, it would have been obvious to combine Vald and Yoshino to obtain the invention as specified in the instant claim.
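The dispute above turns on evaluating an encrypted decision tree by performing encrypted comparisons at its internal nodes. For orientation only, here is a minimal structural sketch of how such an evaluation is commonly organized under FHE: the comparison result is an encrypted bit, and both subtrees are combined by multiplexing instead of branching, so nothing is ever decrypted. Every name here (Enc, enc_less_than, Node, evaluate) is a hypothetical placeholder, not the applicant's claimed circuit or code from Vald or Yoshino; a real system would substitute genuine FHE comparison and arithmetic circuits (e.g., over TFHE or CKKS ciphertexts).

```python
# Structural sketch of branchless ("oblivious") decision-tree evaluation,
# the usual pattern under fully homomorphic encryption. The Enc wrapper is
# a PLACEHOLDER, not real cryptography: it stores the plaintext only so
# that the sketch runs end to end.

from dataclasses import dataclass

@dataclass(frozen=True)
class Enc:
    """Placeholder ciphertext."""
    value: float

def enc_less_than(a: Enc, b: Enc) -> Enc:
    """Stand-in for an encrypted comparison; yields an 'encrypted' 0/1 bit."""
    return Enc(1.0 if a.value < b.value else 0.0)

def enc_mul(a: Enc, b: Enc) -> Enc:
    return Enc(a.value * b.value)

def enc_add(a: Enc, b: Enc) -> Enc:
    return Enc(a.value + b.value)

def enc_not(bit: Enc) -> Enc:
    return Enc(1.0 - bit.value)

@dataclass
class Node:
    feature: int = 0         # input feature index compared at this node
    threshold: Enc = None    # encrypted threshold value
    left: "Node" = None      # branch taken when input < threshold
    right: "Node" = None
    leaf_value: Enc = None   # set only on leaves

def evaluate(node: Node, x: list[Enc]) -> Enc:
    """Evaluates BOTH subtrees and multiplexes them with the encrypted
    comparison bit, so the traversal never branches on plaintext data."""
    if node.leaf_value is not None:
        return node.leaf_value
    go_left = enc_less_than(x[node.feature], node.threshold)  # encrypted bit
    left = enc_mul(go_left, evaluate(node.left, x))
    right = enc_mul(enc_not(go_left), evaluate(node.right, x))
    return enc_add(left, right)

# Tiny demo tree: predicts 1.0 when feature 0 < 5 and feature 1 < 2.
tree = Node(feature=0, threshold=Enc(5),
            left=Node(feature=1, threshold=Enc(2),
                      left=Node(leaf_value=Enc(1.0)),
                      right=Node(leaf_value=Enc(0.0))),
            right=Node(leaf_value=Enc(0.0)))
print(evaluate(tree, [Enc(3), Enc(1)]).value)  # -> 1.0
```

Note that in this pattern every node's comparison is always executed, which is exactly why the claims' point about performing the comparisons "directly in hardware at the internal nodes" is a throughput argument rather than a control-flow one.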
Similar claim 8 additionally discloses: A non-transitory computer readable medium comprising one or more sequences of instructions, which, when executed by one or more processors, causes a server system to perform operations comprising: (Vald [0013, 0064-0065] teaches a non-transitory computer readable medium).

Similar claim 15 additionally discloses: A system, comprising: (Vald [0013 and 0019] teaches "the functions of computing environment 100 may be performed by... a computing system").

Regarding claims 2 and 9, Vald in view of Yoshino discloses: The method of claim 1, wherein the host device is one of a server, a cluster of servers, a cloud computing service, or an edge computing device (Vald [0003-0019, 0022-0032, 0044-0051] teaches that the host device can be cloud computing servers/services), and wherein the accelerator is one of a field programmable gate array, graphics processing unit, or tensor processing unit (Vald [0064-0065] teaches an FPGA accelerator).

Regarding claims 3, 10 and 16, Vald in view of Yoshino discloses: The method of claim 1, wherein the agreed upon encryption schema is a fully homomorphic encryption algorithm (Vald [0003, 0022-0032, 0044-0051] teaches [fully homomorphic encryption]).

Regarding claims 4, 11 and 17, Vald in view of Yoshino discloses: The method of claim 3, wherein the [encrypted decision tree model] [...] (Vald [0003-0018, 0022-0032, 0044-0051] teaches a tree-based machine learning model and encryption using public/private key pairs). Vald does not explicitly disclose: encrypted decision tree. However, in the same field of endeavor, Yoshino discloses: encrypted decision tree (Yoshino [0035-0042, 0053-0059, 0065, 0078-0089, 0105-0111] teaches encrypted decision tree models (in the embodiments, the encrypted data is represented by E( )); [Fig. 12A, Fig. 16A] shows examples of encrypted decision trees). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Yoshino for similar reasons as cited in claim 1.

Regarding claims 5, 12 and 18, Vald in view of Yoshino discloses: The method of claim 1, wherein the agreed upon encryption schema is an order-preserving cryptography schema (Vald [0003-0019, 0022] teaches order-preserving encryption schemes being used).
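Claims 5, 12 and 18 swap the FHE schema for an order-preserving one, under which ciphertexts sort in the same order as plaintexts, so the accelerator can compare raw ciphertexts at each node. A toy illustration of that property only; the keyed strictly increasing table below is hypothetical and has none of the security of a real OPE construction, and it is not the applicant's scheme or anything from the cited references:

```python
# Toy order-preserving "encryption": a secret strictly increasing table.
# Because Enc is strictly increasing, Enc(a) < Enc(b) exactly when a < b,
# which is what lets node comparisons run directly on ciphertexts.
# FOR ILLUSTRATION ONLY; this leaks order by design and is not secure.

import random

def keygen(seed: int = 42, domain: int = 10_000):
    rng = random.Random(seed)
    table, total = [], 0
    for _ in range(domain):
        total += rng.randint(1, 1000)  # positive gap keeps the map increasing
        table.append(total)
    return table  # table[m] is the ciphertext for plaintext m

def encrypt(key, m: int) -> int:
    return key[m]

key = keygen()
a, b = 123, 456
assert (encrypt(key, a) < encrypt(key, b)) == (a < b)  # order is preserved
```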
Regarding claims 6, 13 and 19, Vald in view of Yoshino discloses: The method of claim 5, wherein the [encrypted decision tree model] [...] (Vald [0003-0032, 0044-0051] teaches a tree-based machine learning model and a key management service which uses the same keys throughout the operations). Vald does not explicitly disclose: encrypted decision tree. However, in the same field of endeavor, Yoshino discloses: encrypted decision tree (Yoshino [0035-0042, 0053-0059, 0065, 0078-0089, 0105-0111] teaches encrypted decision tree models (in the embodiments, the encrypted data is represented by E( )); [Fig. 12A, Fig. 16A] shows examples of encrypted decision trees). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Yoshino for similar reasons as cited in claim 1.

Claims 7, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Vald (U.S. 2020/0382273) in view of Yoshino (U.S. 2019/0318104), and further in view of Dalli (U.S. 2022/0138532).

Regarding claims 7, 14 and 20, Vald in view of Yoshino discloses: The method of claim 1, further comprising: [...]. Vald does not explicitly disclose: generating, by the server system, the encrypted decision tree model by encrypting computations performed at each internal node of the encrypted decision tree model; extracting, by the server system, decision rules performed from a root node of the encrypted decision tree model to each leaf node; and converting, by the server system, the decision rules into source code for upload to the accelerator.

However, in the same field of endeavor, Yoshino discloses: generating, by the server system, the encrypted decision tree model by encrypting computations performed at each internal node of the encrypted decision tree model (Yoshino [0035-0042, 0053-0059, 0065, 0078-0089, 0105-0111] teaches encrypted decision tree models (in the embodiments, the encrypted data is represented by E( )); [Fig. 12A, Fig. 16A] shows examples of encrypted decision trees; [0006, 0091-0101] teaches identification and extraction of branching rules).

Vald in view of Yoshino does not explicitly disclose: extracting, and converting, by the server system, the decision rules into source code for upload to the accelerator. However, in the same field of endeavor, Dalli teaches: extracting, and converting, by the server system, the decision rules into source code for upload to the accelerator (Dalli [0058, 0062] teaches model interpretability which includes a generalized rule-based format of decision trees; [0126, 0166] teaches that the extracted model and the rules may be applied to a hardware specification and may subsequently be used to output a hardware device and/or hardware circuit specification in a suitable output format such as the VHSIC Hardware Description Language (VHDL), Verilog, AHDL, or another suitable hardware specification language).

Vald in view of Yoshino and Dalli are analogous art because they are from the same field of endeavor, decision tree learning. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Vald, Yoshino and Dalli before him or her, to modify the method of Vald in view of Yoshino to include the Verilog conversion of Dalli, because it will allow the processing to be performed by hardware accelerators, resulting in improved performance. The motivation for doing so would be that "models may be easily implementable in hardware efficiently, leading to substantial speed and space improvements" (Dalli, paragraphs 0045, 0126-0129). Therefore, it would have been obvious to combine Vald in view of Yoshino and Dalli to obtain the invention as specified in the instant claim.
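Claims 7, 14 and 20 recite extracting root-to-leaf decision rules and converting them into source code for upload to the accelerator, which the rejection maps to Dalli's HDL output. A minimal sketch of that extract-and-convert step, with hypothetical helpers (Node, extract_rules, to_verilog) and deliberately simplified plaintext comparisons; a flow matching the claims would emit the encrypted comparison primitives instead, and real Verilog generation would use an if/else chain with a default assignment to avoid inferring latches:

```python
# Sketch: walk a decision tree from root to each leaf, collect the branching
# rules along each path, and emit them as (simplified) Verilog source.
# All helper names are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Node:
    feature: str = ""
    threshold: int = 0
    left: "Node" = None      # branch taken when feature < threshold
    right: "Node" = None
    leaf_value: int = None   # set only on leaves

def extract_rules(node, path=()):
    """Yields (conditions, leaf_value) for every root-to-leaf path."""
    if node.leaf_value is not None:
        yield list(path), node.leaf_value
        return
    cond = f"{node.feature} < {node.threshold}"
    yield from extract_rules(node.left, path + (cond,))
    yield from extract_rules(node.right, path + (f"!({cond})",))

def to_verilog(tree, out="result"):
    """Converts the extracted rules into a combinational always block."""
    lines = ["always @(*) begin"]
    for conds, value in extract_rules(tree):
        lines.append(f"  if ({' && '.join(conds)}) {out} = {value};")
    lines.append("end")
    return "\n".join(lines)

tree = Node("x0", 5,
            left=Node("x1", 2, left=Node(leaf_value=1), right=Node(leaf_value=0)),
            right=Node(leaf_value=0))
print(to_verilog(tree))
```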
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Kreinin (US 2017/0103022, 6/9/2016) teaches that generated decision trees can be converted into programming code.

Gentry (US 2020/0403781, 6/18/2019) teaches using homomorphic encryption at the bit level.

Verma (US 2020/0125739, 10/19/2018) teaches: "Distributed machine learning employs a central fusion server that coordinates the distributed learning process. Preferably, each of a set of learning agents that are typically distributed from one another initially obtains initial parameters for a model from the fusion server. Each agent trains using a dataset local to the agent. The parameters that result from this local training (for a current iteration) are then passed back to the fusion server in a secure manner, and a partial homomorphic encryption scheme is then applied. In particular, the fusion server fuses the parameters from all the agents, and it then shares the results with the agents for a next iteration. In this approach, the model parameters are secured using the encryption scheme, thereby protecting the privacy of the training data, even from the fusion server itself."

Aggarwal (US 2024/0054353, 8/8/2023) teaches: "According to an aspect, there is provided an apparatus comprising means for receiving, from a server, an authorization request for a federated learning operation, the authorization request identifying a plurality of user equipment, and means for determining, using subscription data associated with each of the plurality of user equipment, whether each of the plurality of user equipment are authorized to be used by the server for the federated learning operation. The apparatus also comprising means for, in response to determining that at least two of the plurality of user equipment are authorized, providing a message to each of the at least two of the plurality of user equipment that are authorized, each message comprising an encryption key associated with the federated learning operation."

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office Action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS A. CARNES, whose telephone number is (571) 272-4378. The examiner can normally be reached Monday-Friday. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shewaye Gelagay, can be reached at (571) 272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THOMAS A CARNES/
Examiner, Art Unit 2436

/MOEEN KHAN/
Primary Examiner, Art Unit 2436

Prosecution Timeline

Oct 03, 2023
Application Filed
Sep 23, 2025
Non-Final Rejection — §103
Dec 15, 2025
Response Filed
Feb 13, 2026
Final Rejection — §103
Apr 14, 2026
Response after Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12556566
SYSTEMS AND METHODS FOR DYNAMIC VULNERABILITY SCORING
2y 5m to grant Granted Feb 17, 2026
Patent 12538130
SYSTEMS AND METHODS FOR RUNNING MULTIPLE LOGICAL SECURE ELEMENTS ON THE SAME SECURE HARDWARE
2y 5m to grant Granted Jan 27, 2026
Patent 12524566
RESTRICTED FULLY PRIVATE CONJUNCTIVE DATABASE QUERY FOR PROTECTION OF USER PRIVACY AND IDENTITY
2y 5m to grant Granted Jan 13, 2026
Patent 12488141
SYSTEM AND METHOD FOR PRIVACY-PRESERVING DISTRIBUTED TRAINING OF NEURAL NETWORK MODELS ON DISTRIBUTED DATASETS
2y 5m to grant Granted Dec 02, 2025
Patent 12401525
VEHICLE TEMPORARY CERTIFICATE AUTHENTICATION
2y 5m to grant Granted Aug 26, 2025
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 99% (+73.2%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 70 resolved cases by this examiner. Grant probability derived from career allow rate.
