Prosecution Insights
Last updated: April 19, 2026
Application No. 18/368,209

MACHINE LEARNING MODEL PROTECTION

Non-Final OA — §102, §103
Filed
Sep 14, 2023
Examiner
WILLIAMS, JEFFERY A
Art Unit
2488
Tech Center
2400 — Computer Networks
Assignee
Irdeto B.V.
OA Round
1 (Non-Final)
Grant Probability
84% (Favorable)
OA Rounds
1-2
To Grant
2y 7m
With Interview
92%

Examiner Intelligence

Career Allow Rate
84%, above average (768 granted / 920 resolved; +25.5% vs TC avg)
Interview Lift
+9.0%, moderate (resolved cases with vs. without an interview)
Avg Prosecution
2y 7m typical timeline; 47 applications currently pending
Total Applications
967 across all art units (career history)
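The headline figures on these cards can be reproduced from the raw counts shown above. A minimal sketch in Python, assuming the allow rate is simply granted/resolved and the interview lift is an additive percentage-point adjustment (the page does not state its exact lift model):

```python
# Reproducing the examiner stat-card figures from the counts shown above.
# Assumptions: allow rate = granted / resolved, and the +9.0% interview
# lift is applied as an additive percentage-point adjustment.

granted, resolved = 768, 920

allow_rate = granted / resolved                   # 0.8348 -> shown as 84%
print(f"Career allow rate: {allow_rate:.1%}")     # 83.5%

interview_lift_pts = 9.0                          # "+9.0% Interview Lift"
with_interview = allow_rate * 100 + interview_lift_pts
print(f"With interview: ~{with_interview:.0f}%")  # ~92%, matching the card
```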

Statute-Specific Performance

§101: 8.0% (-32.0% vs TC avg)
§103: 43.7% (+3.7% vs TC avg)
§102: 21.9% (-18.1% vs TC avg)
§112: 19.4% (-20.6% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 920 resolved cases
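Each of the four deltas above is consistent with a single Tech Center average estimate of 40.0% (for example, 8.0% - 40.0% = -32.0%). A quick sanity check, noting that the 40.0% figure is inferred from the displayed deltas rather than stated on the page:

```python
# Sanity check: every "vs TC avg" delta above equals the examiner's
# statute-specific rate minus a single Tech Center average estimate.
# The 40.0% estimate is inferred from the deltas, not stated on the page.

TC_AVG_ESTIMATE = 40.0  # percent (inferred)

examiner_rates = {"§101": 8.0, "§103": 43.7, "§102": 21.9, "§112": 19.4}

for statute, rate in examiner_rates.items():
    delta = rate - TC_AVG_ESTIMATE
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```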

Office Action

Rejections: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 5, 6, 10-12, 16, 19, 20, and 23-26 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kim et al. (Kim) (US 2020/0184070).

Regarding claims 1 and 16, Kim discloses a system comprising one or more hardware processors ([0005], [0054], the system is embodied as hardware), the one or more hardware processors arranged to carry out a machine learning model protection method ([0004], a binary code is inserted for protecting a machine learning model), the machine learning model protection method comprising: generating, based on a set of parameters that define a machine learning model, an item of software which, when executed by one or more processors, provides an implementation for the machine learning model ([0025], [0026], [0036], information for generating a machine learning model is generated by development component 210 and used by program analysis component 220 for generating a machine learning model for execution); and applying one or more software protection techniques to the item of software ([0024], [0035], [0038], [0047], [0050], [0053], a binary code for protecting the generated model is inserted into the model).

Regarding claims 5 and 19, Kim discloses the machine learning model is representable, at least in part, as a plurality of nodes, each node having corresponding node functionality ([0029], [0030], the model is represented as a set of nodes, each of which represents a program statement (function)); and the item of software comprises a plurality of node functions, wherein each node function, when executed by the one or more processors, provides an implementation of the node functionality of a respective subset of the plurality of nodes ([0029], [0030], the model is represented as a set of nodes, each of which represents a program statement (function)).

Regarding claims 6 and 20, Kim discloses wherein the respective subset of the plurality of nodes is a single node of the plurality of nodes ([0029], program statements are embodied as single nodes).
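Claims 5 and 19, as characterized above, recite generating an item of software containing one node function per model node. A minimal sketch of that structure, assuming a toy two-node ReLU model; the node names, weights, and emitted template are hypothetical, not Kim's mechanism or the applicant's actual implementation:

```python
# Sketch of the per-node code generation recited in claims 5/19: each node
# of the model becomes its own function in the generated item of software.
# The toy model and emitted template are hypothetical.

nodes = {  # node name -> (weights, bias) for a tiny two-node model
    "n0": ([0.5, -1.2], 0.1),
    "n1": ([0.8, 0.3], -0.4),
}

def generate_item_of_software(nodes):
    """Emit Python source with one node function per node, plus a driver."""
    lines = []
    for name, (w, b) in nodes.items():
        lines.append(f"def {name}(x):")
        lines.append("    # implements the node functionality of a single node")
        lines.append(f"    return max(0.0, sum(wi * xi for wi, xi in zip({w}, x)) + {b})")
        lines.append("")
    lines.append("def model(x):")
    lines.append("    return [" + ", ".join(f"{n}(x)" for n in nodes) + "]")
    return "\n".join(lines)

src = generate_item_of_software(nodes)  # the generated "item of software"
exec(src)                               # stand-in for building/shipping it
print(model([1.0, 2.0]))                # [0.0, 1.0]
```

Because the template emits exactly one function per node, the "respective subset" of claims 6 and 20 reduces to a single node, as in the rejection above.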
Regarding claims 10 and 23, Kim discloses wherein the set of parameters are data interpretable by a machine learning framework software application (220) ([0025], [0026], [0036], information for generating a machine learning model is generated by development component 210 and used by program analysis component 220 for generating a machine learning model for execution; [0054], the system is embodied as software and/or hardware) to perform the machine learning model ([0025], [0026], [0036], information for generating a machine learning model is used by development component 210 for generating a machine learning model for execution).

Regarding claims 11 and 24, Kim discloses wherein the set of parameters specify one or more of: (b) some or all of the structure of the machine learning model ([0025], the development component 210 is configured to implement a development stage that designs the ML program for generating a ML model; during the development stage, the source code of the ML program can be annotated to generate an ML program annotation that indicates which part needs confidentiality protection).

Regarding claims 12 and 25, Kim discloses wherein generating the item of software comprises including, as part of the item of software, instructions which, when executed by the one or more processors, provide one or more security features in combination with the implementation for the machine learning model ([0025], the development component 210 is configured to implement a development stage that designs the ML program for generating a ML model; during the development stage, the source code of the ML program can be annotated to generate an ML program annotation that indicates which part needs confidentiality protection).
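Claims 12 and 25, as characterized above, recite emitting security-feature instructions together with the model implementation. A minimal sketch of that idea, using a hash-based integrity self-check as the security feature; the check, names, and parameter format are illustrative assumptions, not Kim's annotation mechanism or the applicant's method:

```python
# Sketch of claim 12's "security features" limitation: the generated
# software embeds an integrity self-check next to the model code.
# The hash-based check and parameter format are illustrative assumptions.

import hashlib

def generate_protected_software(params: dict) -> str:
    body = f"PARAMS = {params!r}\n"
    digest = hashlib.sha256(body.encode()).hexdigest()
    return (
        body
        + f"EXPECTED = '{digest}'\n"
        + "import hashlib\n"
        + "def model(x):\n"
        + "    # security feature: refuse to run if PARAMS was tampered with\n"
        + "    line = 'PARAMS = ' + repr(PARAMS) + '\\n'\n"
        + "    assert hashlib.sha256(line.encode()).hexdigest() == EXPECTED\n"
        + "    return sum(w * xi for w, xi in zip(PARAMS['weights'], x))\n"
    )

src = generate_protected_software({"weights": [0.5, -1.2]})
exec(src)
print(model([1.0, 2.0]))  # -1.9, after the integrity check passes
```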
Regarding claim 26, the limitations of claim 26 are rejected in the analysis of claim 16. Kim further discloses a non-transitory computer readable medium storing a computer program which, when executed by one or more hardware processors, causes the one or more hardware processors to carry out a machine learning model protection method ([0055], a stored program is executed).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2, 3, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (Kim) (US 2020/0184070) in view of Lin et al. (Lin) (US 2016/0328647).

Regarding claims 2, 3, 17, and 18, Kim discloses the system of claim 16 (see claim 16 above). Kim is silent about wherein the item of software implements arithmetic operations as fixed-point operations; and wherein the machine learning model protection method comprises one or both of: (a) obtaining a user-defined precision for the fixed-point operations for use in said generating the item of software; and (b) obtaining a user-defined specification for a number of bits for representing an input to and/or an output of the arithmetic operations.

Lin, from the same or similar field of endeavor, discloses wherein the item of software implements arithmetic operations as fixed-point operations ([0010], [0025], the model is configured as a fixed-point implementation); and wherein the machine learning model protection method comprises: (b) obtaining a user-defined specification for a number of bits for representing an input to and/or an output of the arithmetic operations ([0010], [0028], [0031], [0049], [0052], the bit width for parameters and calculations for a neural network is adjusted based on user input).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Lin into the teachings of Kim to reduce a model size, reduce processing time, reduce memory bandwidth, and/or reduce power consumption (Kim: [0025]).
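The fixed-point limitation for which Lin is cited can be illustrated briefly. A sketch of fixed-point arithmetic with a user-defined precision; the Q-format helpers below are a generic illustration, not Lin's actual scheme:

```python
# Sketch of fixed-point arithmetic with a user-defined precision, in the
# spirit of how Lin is characterized above. The Q-format and bit width
# below are illustrative choices, not Lin's actual scheme.

def to_fixed(x: float, frac_bits: int) -> int:
    """Quantize a float to a signed fixed-point integer with frac_bits."""
    return round(x * (1 << frac_bits))

def fixed_mul(a: int, b: int, frac_bits: int) -> int:
    """Multiply two fixed-point values, rescaling back to frac_bits."""
    return (a * b) >> frac_bits

frac_bits = 8                   # user-defined precision (Q*.8)
w = to_fixed(0.75, frac_bits)   # 192
x = to_fixed(-1.5, frac_bits)   # -384
y = fixed_mul(w, x, frac_bits)
print(y / (1 << frac_bits))     # -1.125 == 0.75 * -1.5
```

Here frac_bits plays the role of the user-defined specification for the number of bits representing the inputs and outputs of the arithmetic operations.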
Claims 7, 9, 21, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (Kim) (US 2020/0184070) in view of applicant admitted prior art (AAPA).

Regarding claims 7, 9, 21, and 22, Kim discloses the system of claim 19 (see claim 19 above) and the system of claim 16 (see claim 16 above). Kim is silent about wherein the machine learning model is one of: (a) a model for a neural network and each of the plurality of nodes is a respective neuron of the neural network; (b) a model for a decision tree and each of the plurality of nodes is a respective node of the decision tree; (c) a model for a random forest and each of the plurality of nodes is a respective node of the random forest; and wherein the machine learning model is a support vector machine.

The applicant admits as prior art (AAPA) wherein the machine learning model is one of: (a) a model for a neural network and each of the plurality of nodes is a respective neuron of the neural network; (b) a model for a decision tree and each of the plurality of nodes is a respective node of the decision tree; (c) a model for a random forest and each of the plurality of nodes is a respective node of the random forest; and wherein the machine learning model is a support vector machine (pg. 1, para. 2, many types of machine learning models include neural networks, decision trees, support vector machines, and random forests).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of AAPA into the teachings of Kim, dependent upon the choice of system design for implementing common machine learning model types.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Srinivasan et al. (Srinivasan) (US 2021/0133577) ([0002], machine learning models are encrypted).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEFFERY A WILLIAMS, whose telephone number is (571) 270-7579. The examiner can normally be reached M-F 8:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sath Perungavoor, can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JEFFERY A WILLIAMS/
Primary Examiner, Art Unit 2488

Prosecution Timeline

Sep 14, 2023
Application Filed
Mar 23, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this examiner with similar technology

Patent 12603981
IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS
2y 5m to grant • Granted Apr 14, 2026
Patent 12596182
OPTICAL SYSTEM FOR LIGHT DETECTION AND RANGING
2y 5m to grant • Granted Apr 07, 2026
Patent 12581040
SYSTEMS AND METHODS FOR COORDINATED COLLECTION OF STREET-LEVEL IMAGE DATA
2y 5m to grant • Granted Mar 17, 2026
Patent 12581049
IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS
2y 5m to grant • Granted Mar 17, 2026
Patent 12556824
IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds
1-2
Grant Probability
84%
With Interview (+9.0%)
92%
Median Time to Grant
2y 7m
PTA Risk
Low
Based on 920 resolved cases by this examiner. Grant probability derived from career allow rate.
