Prosecution Insights
Last updated: April 19, 2026
Application No. 18/660,366

MACHINE LEARNING MODEL DEPLOYED TO AN ENCRYPTED COMPUTATIONAL GRAPH

Non-Final OA: §102, §112
Filed
May 10, 2024
Examiner
HENDERSON, ESTHER BENOIT
Art Unit
2458
Tech Center
2400 — Computer Networks
Assignee
Microsoft Technology Licensing, LLC
OA Round
1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% — above average (534 granted / 677 resolved; +20.9% vs TC avg)
Interview Lift: +23.5% — strong lift in resolved cases with an interview
Typical Timeline: 3y 8m average prosecution; 14 applications currently pending
Career History: 691 total applications across all art units
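The headline figures above are simple arithmetic over the examiner's career record. A minimal Python sketch, using the reported counts (534 granted / 677 resolved) and the +23.5% interview lift; the whole-percent rounding and the 99% cap on the interview-adjusted figure are assumptions about how the tool displays these numbers, not documented behavior:

```python
# Hypothetical reconstruction of the dashboard's headline numbers.
# Inputs come from the report; rounding and the 99% cap are assumptions.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pct: float, cap: float = 99.0) -> float:
    """Interview-adjusted grant probability, capped (assumed) at 99%."""
    return min(base_pct + lift_pct, cap)

base = allow_rate(534, 677)                 # about 78.9, displayed as 79%
adjusted = with_interview(round(base), 23.5)

print(f"Career allow rate: {base:.1f}%")
print(f"With interview:    {adjusted:.0f}%")
```

Under these assumptions the 79% base plus the 23.5-point lift exceeds 100%, which is consistent with the displayed "99% With Interview" being a capped value.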

Statute-Specific Performance

§101: 12.0% (-28.0% vs TC avg)
§103: 40.5% (+0.5% vs TC avg)
§102: 27.6% (-12.4% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 677 resolved cases
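The per-statute deltas can be cross-checked against the rates: subtracting each reported delta from its rate recovers the implied Tech Center average. A short sketch (the variable names are illustrative, not from the tool), using only the figures reported above:

```python
# Cross-check of the statute-specific performance table.
# Rates and deltas are taken directly from the report; subtracting each
# delta from its rate recovers the implied Tech Center (TC) baseline.

rates = {"101": 12.0, "103": 40.5, "102": 27.6, "112": 12.7}
deltas = {"101": -28.0, "103": +0.5, "102": -12.4, "112": -27.3}

for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]  # implied TC average for this statute
    print(f"§{statute}: {rate:.1f}% ({deltas[statute]:+.1f}% vs TC avg {tc_avg:.1f}%)")
```

Each statute implies the same 40.0% TC average, which suggests the report compares all four statutes against a single TC-wide baseline estimate.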

Office Action

Rejections: §102, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is in response to a restriction requirement that was mailed October 17, 2025. An election was made without traverse on December 17, 2025, electing examination of Group I, consisting of claims 1-15. Group II, consisting of claims 16-20, was NOT elected. Claims 16-20 have been cancelled. Claims 21-25 are newly added. Claims 1-15 and 21-25 are now pending in this application.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on September 3, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 6 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 6 recites the limitation "the less". There is insufficient antecedent basis for this limitation in the claim. It also appears Applicant(s) did not complete the claimed limitation because the limitation does not make grammatical sense. Appropriate correction is required.
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-15 and 21-25 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gu et al. (US 2023/0394324 A1).

With respect to claim 1, Gu discloses one or more computer storage media comprising computer executable instructions that, when executed by a computing device, perform a method of using an encrypted computational graph ([0003] and [0032], neural flow is encrypted and exchanged between user or tenant and a computing system), the method comprising: identifying, at a deployment server, a plurality of operational instructions needed for a machine-learning runtime environment to execute a machine-learning model ([0006], [0026], and Abstract, trained computer model is deployed to computing platform, and neural flow attestation engine verifies executing integrity of deployed model based on runtime neural flow); communicating, to a client device, the plurality of operational instructions to the machine-learning runtime environment ([0025], neural flows executed are logged and sent as an attestation report to tenants); receiving, from the machine-learning runtime environment on the client device, information associated with a trusted execution environment on the client device ([0049], user receives and maintains a measurement database to verify whether neural flows deviate (broken trust) from training model stored), wherein the information includes supported operational instructions for the trusted execution environment and an encryption key for the trusted execution environment ([0060], attestation generation engine may encrypt result with security key exchanged between user/tenant and TEE as part of TEE launch); building the encrypted computational graph for execution by the machine learning runtime environment using the supported operational instructions and the encryption key ([0026], neural flow model is generated); and communicating the encrypted computational graph to the client device ([0032], attested neural flow (model) is exchanged with user or tenant).

With respect to claim 2, Gu discloses the media of claim 1, wherein the method further comprises: receiving a certificate for the trusted execution environment ([0032]); and prior to building the encrypted computational graph, validating the certificate ([0032]).

With respect to claim 3, Gu discloses the media of claim 1, wherein the supported operational instructions describe machine-learning operations the trusted execution environment is capable of performing ([0060]).

With respect to claim 4, Gu discloses the media of claim 1, wherein the information also includes an amount of memory available within the trusted execution environment ([0063]).

With respect to claim 5, Gu discloses the media of claim 1, wherein a node in the encrypted computational graph includes a dedicated operational instruction that is dedicated to the trusted execution environment and encrypted machine-learning model data that is encrypted using the encryption key ([0020]).

With respect to claim 6, Gu discloses the media of claim 5, wherein the encryption key a public key wherein the less ([0032]).

With respect to claim 7, Gu discloses the media of claim 5, wherein the dedicated operational instruction is provided by the trusted execution environment ([0032]).

With respect to claim 8, Gu discloses the media of claim 1, wherein the trusted execution environment includes a portion of memory on a graphics processing unit on the client device ([0079]).

With respect to claim 9, Gu discloses a method of using an encrypted computational graph comprising: receiving, at a client device, an input for a machine-learning model that is represented as the encrypted computational graph within a machine-learning runtime environment on the client device ([0003] and [0032], neural flow is encrypted and exchanged between user or tenant and a computing system); communicating an operational instruction associated with a first node of the encrypted computational graph to one or more trusted execution environments on the client device ([0006], [0026], and Abstract, trained computer model is deployed to computing platform, and neural flow attestation engine verifies executing integrity of deployed model based on runtime neural flow); receiving an indication from a trusted execution environment indicating that the trusted execution environment is able to process the operational instruction ([0049], user receives and maintains a measurement database to verify whether neural flows deviate (broken trust) from training model stored); communicating the input and encrypted machine-learning content associated with the first node to the trusted execution environment ([0032], attested neural flow (model) is exchanged with user or tenant); and receiving, from the trusted execution environment, an output generated by executing computations instructed by the encrypted machine-learning content on the input ([0032], attested neural flow (model) is exchanged with user or tenant).

With respect to claim 10, Gu discloses the method of claim 9, wherein the operational instruction is dedicated to the trusted execution environment ([0060]).

With respect to claim 11, Gu discloses the method of claim 9, wherein the encrypted machine-learning content is encrypted using a public key provided by the trusted execution environment ([0032]).
With respect to claim 12, Gu discloses the method of claim 9, wherein the encrypted machine-learning content includes learned weights for a large language model ([0050]).

With respect to claims 13-15 and 21-25, these claims do not limit or further define over the media and method of claims 1-11. The limitations of claims 13-15 and 21-25 are essentially similar to the limitations of claims 1-11. Therefore, claims 13-15 and 21-25 are rejected for the same reasons as claims 1-11. Please see the rejection above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ESTHER B. HENDERSON, whose telephone number is (571) 270-3807. The examiner can normally be reached Monday-Friday, 6a-2p ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Umar Cheema, can be reached at 571-270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ESTHER B. HENDERSON/
Primary Examiner, Art Unit 2458
January 22, 2026

Prosecution Timeline

May 10, 2024 — Application Filed
Jan 23, 2026 — Non-Final Rejection (§102, §112)
Feb 17, 2026 — Interview Requested
Mar 18, 2026 — Interview Requested
Apr 14, 2026 — Applicant Interview (Telephonic)
Apr 14, 2026 — Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596836 — COMPLIANCE DATA STANDARDIZATION AND AUDIT OUTPUT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591631 — VISIBILITY APPLICATIONS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579078 — SPECULATING OBJECT-GRANULAR KEY IDENTIFIERS FOR MEMORY SAFETY (granted Mar 17, 2026; 2y 5m to grant)
Patent 12580779 — INDUSTRIAL AUTOMATION MANUFACTURING WITH NFTs AND SMART CONTRACTS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12562900 — SECURE SECRETS MANAGEMENT IN AN INTEGRATION PLATFORM (granted Feb 24, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+23.5%)
Median Time to Grant: 3y 8m
PTA Risk: Low
Based on 677 resolved cases by this examiner. Grant probability derived from career allow rate.
