Prosecution Insights
Last updated: April 19, 2026
Application No. 17/911,362

SYSTEM AND METHOD FOR ADAPTING TO CHANGING CONSTRAINTS

Final Rejection — §102, §103, §DP

Filed: Sep 13, 2022
Examiner: TANK, ANDREW L
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: InterDigital CE Patent Holdings
OA Round: 2 (Final)

Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (above average; +13.0% vs TC avg), 366 granted / 538 resolved
Interview Lift: +31.2% (strong), higher allow rate among resolved cases with an interview
Typical Timeline: 4y 0m average prosecution; 43 applications currently pending
Career History: 581 total applications across all art units

Statute-Specific Performance

§101: 12.0% (-28.0% vs TC avg)
§103: 37.5% (-2.5% vs TC avg)
§102: 28.6% (-11.4% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)
Tech Center average is an estimate • Based on career data from 538 resolved cases
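A minimal sketch of how these figures can be cross-checked, assuming each "vs TC avg" value is a simple difference between the examiner's statute-specific rate and the Tech Center average (the page does not state its formula; variable names are illustrative):

```python
# Assumption: delta = examiner_rate - tech_center_average, so the implied
# TC baseline is examiner_rate - delta. Values are percentages from the table.
examiner_rates = {"§101": 12.0, "§103": 37.5, "§102": 28.6, "§112": 13.5}
deltas_vs_tc   = {"§101": -28.0, "§103": -2.5, "§102": -11.4, "§112": -26.5}

for statute, rate in examiner_rates.items():
    implied_tc_avg = rate - deltas_vs_tc[statute]
    print(f"{statute}: examiner {rate:.1f}%, implied TC average {implied_tc_avg:.1f}%")
# Every statute implies the same ~40% Tech Center baseline, consistent with a
# single estimated average for TC 2100.
```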

Office Action

§102, §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following action is in response to the amendment and remarks of 10/09/2025. By the amendment, claims 30, 37-40 and 47-49 have been amended. Claims 50-51 have been newly added. Claims 30-51 are pending and have been considered below.

Response to Arguments

The double patenting rejection of claims 30, 32-33, 37-40, 42-43 and 47-49 over claims 30, 31-32, 34, 36 and 39 of co-pending application No. 17/911,866 (Non-Final Rejection 07/22/2025, pages 2-6) has been withdrawn in light of the approved Terminal Disclaimer (Terminal Disclaimer 10/09/2025) and Applicant's corresponding remarks (Remarks, page 8).

The 35 USC 102 rejection of claims 30-36, 38-46 and 48-49 by LIN (Non-Final Rejection 07/22/2025, pages 2-6) has been maintained in light of the amendment and Applicant's corresponding remarks (Remarks, pages 8-9). Particularly, Applicant argues that LIN fails to teach or suggest "while processing the first portion of the input data sequence by the first neural network, receiving a second indication of a second constraint" and "processing a second portion of the input data sequence by the second neural network based on the second constraint" as now required by the amended claims. The Examiner respectfully disagrees. LIN discloses receiving continuous portions of an input data sequence for processing by neural networks (¶32, ¶34), wherein the neural network configurations change based on changing constraints while receiving the portions of input data (¶49-55, Fig. 4). Accordingly, LIN does disclose, while processing a first portion of the input data by a first neural network, receiving a second indication of a second constraint and processing a second portion of the input data by a second neural network based on the second constraint, as recited in the amended claims. The argument is not persuasive.

Applicant further argues (Remarks, page 9) that LIN fails to disclose features of newly added claims 50-51. The Examiner respectfully disagrees, as shown in the rejection of new claims 50-51 below.

The 35 USC 103 rejection of claims 37 and 47 over LIN in view of THAKKER (Non-Final Rejection 07/22/2025, pages 2-6) has been maintained in light of the amendment and Applicant's corresponding remarks (Remarks, pages 9-10). Particularly, Applicant argues that THAKKER fails to meet the deficiencies of LIN as discussed regarding amended claims 30 and 40. The Examiner notes that the only deficiency of LIN is that its RNN models are not skip-RNN models. This deficiency is met by THAKKER as discussed in the corresponding 35 USC 103 rejections. The argument is not persuasive.

Applicant similarly further argues (Remarks, page 10) that THAKKER fails to disclose features of newly added claims 50-51. The Examiner notes that LIN discloses these features as presented in the rejection of claims 50-51 below.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 30-36, 38-46 and 48-51 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by LIN, US 2016/0328644 A1, previously presented.

Regarding claim 30, LIN discloses a method performed by a wireless transmit/receive unit (WTRU) (¶27), the method comprising: receiving an input data sequence (¶32: input stream, ¶34: recurrent neural networks process input data sequence, ¶48: input data may include audio/video/sensor data); receiving a first indication of a first constraint for processing a first portion of the input data sequence at a first time by a first neural network (¶34: input data portion sequence, ¶49-50: artificial neural network for processing input data, ¶51-53: determine constraint factors such as resource availability for the preconverted ANN, Fig. 4), wherein the first indication indicates a relationship between the first constraint and a characteristic of the first neural network for processing the first portion of the input data sequence (¶51-52: changing configuration of ANN based on performance constraints, ¶74-75: configuration characteristics include model sparsity based on performance objectives); while processing the first portion of the input data sequence by the first neural network, receiving a second indication of a second constraint corresponding to a change in the first constraint for processing a second portion of the input data sequence at a second time by a second neural network (¶34, ¶49-50, ¶51-53: converted ANN, a second NN, is converted in response to change in first constraint to continue processing input data portion sequences, Fig. 4), wherein the second indication indicates a relationship between the second constraint and a characteristic of the second neural network for processing the second portion of the input data sequence (¶51-52, ¶74-75: converted ANN having second set of characteristics reflects changed constraints); and processing the second portion of the input data sequence by the second neural network based on the second constraint (¶51-55: dynamically select appropriate ANN configuration for processing input data portion sequences, Fig. 4).

Regarding claim 31, LIN discloses the method of claim 30, wherein: the characteristic of the first neural network comprises at least one of a first computation cost or a first accuracy associated with the first neural network (¶51-52, ¶74-75); and the characteristic of the second neural network comprises at least one of a second computation cost or a second accuracy associated with the second neural network (¶51-52, ¶74-75).

Regarding claim 32, LIN discloses the method of claim 30, wherein the first constraint comprises at least one of a computational resource availability or a data processing accuracy (¶51-52).

Regarding claim 33, LIN discloses the method of claim 30, wherein the first neural network has a greater computational load than the second neural network, and wherein the first indication indicates a greater computational resource availability than the second indication (¶51-52).
Regarding claim 34, LIN discloses the method of claim 33, further comprising: transmitting, to a device other than the WTRU (¶99), at least one value indicating a difference in accuracy associated with the first neural network and the second neural network (¶53, ¶56); or transmitting, to the device other than the WTRU (¶99), an expected delay associated with switching between using the first neural network and the second neural network for processing the input data sequence.

Regarding claim 35, LIN discloses the method of claim 34, wherein the first neural network and the second neural network are included in a family of neural networks comprising at least one additional neural network, wherein each neural network in the family of neural networks is associated with a different computational load and a different accuracy (¶57, ¶66), and wherein at least one of the computational load or accuracy associated with each neural network in the family of neural networks is transmitted to the device other than the WTRU (¶57, ¶66, ¶99).

Regarding claim 36, LIN discloses the method of claim 35, wherein the family of neural networks is communicated in a package indicating available neural networks at the WTRU for processing the input data sequence (¶57), and wherein the package includes metadata that indicates the at least one of the computational load or the accuracy associated with each neural network in the family of neural networks (¶57).

Regarding claim 38, LIN discloses the method of claim 33, wherein the second neural network is adapted from the first neural network to enable processing of the second portion of the input data sequence with a lower computational load (¶51-53), and wherein the second neural network is configured to minimize a loss in accuracy from the first neural network (¶70-71, ¶74-75).

Regarding claim 39, LIN discloses the method of claim 30, wherein the input data sequence comprises video data or audio data (¶48), and wherein the processing of the first portion or the processing of the second portion is performed using an encoder or a decoder on the WTRU (¶27).

Regarding claims 40-46 and 48-49, claims 40-46 and 48-49 recite limitations similar to claims 30-36 and 38-39, respectively, and are similarly rejected.

Regarding claim 50, LIN discloses the method of claim 30, further comprising, prior to processing the second portion of the input data sequence by the second neural network based on the second constraint: retrieving an internal state of the first neural network (¶64); and initializing the second neural network with the internal state of the first neural network (¶64: based on current neural network, adjust hyper-parameters for initializing weights of adjusted neural network).

Regarding claim 51, claim 51 recites limitations similar to claim 50 and is similarly rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 37 and 47 are rejected under 35 U.S.C. 103 as being unpatentable over LIN in view of THAKKER, US 2021/0056422 A1, effective filing date of 08/23/2019, previously presented.

Regarding claim 37, LIN discloses the method of claim 33, wherein the first neural network comprises a first recurrent neural network (RNN) model, wherein the second neural network comprises a second RNN model (¶34), wherein the second RNN model has a lower computational load when processing the second portion of the input data sequence than a computational load of the first RNN model when processing the first portion of the input data sequence (¶51-55). LIN fails to disclose wherein the first and second RNN models are skip-RNN models. THAKKER discloses methods for improving RNN models processing sequential data (¶3-4, ¶22). In particular, THAKKER discloses improving an RNN model with skip-RNN model functionality (¶22). Therefore it would have been obvious to one having ordinary skill in the art, with the teachings of LIN and THAKKER before them before the effective filing date of the claimed invention, to combine the skip-RNN model teachings of THAKKER with the RNN model of LIN, yielding the predictable result of the first and second RNN models of LIN and THAKKER being skip-RNN models. One would have been motivated to make this combination in order to provide greater functionality and performance improvements when executing an RNN on sequential data, as suggested by THAKKER (¶3-4, ¶22, ¶25).

Regarding claim 47, claim 47 recites limitations similar to claim 37 and is similarly rejected.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW L TANK, whose telephone number is (571) 270-1692. The examiner can normally be reached Monday-Thursday, 9a-6p. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Ell, can be reached at 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW L TANK/
Primary Examiner, Art Unit 2141
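To make the disputed claim language concrete (switching from a first to a second neural network when a constraint changes mid-sequence, and carrying the first network's internal state into the second, per claims 30 and 50), the following is a minimal illustrative sketch. It is not drawn from LIN, THAKKER, or the application's disclosure; the toy RNN classes, dimensions, and the simple truncate-or-pad state transfer are assumptions made purely for illustration.

```python
# Illustrative sketch only; not LIN's, THAKKER's, or the applicant's method.
# Two toy RNNs of different cost process successive portions of one input
# sequence. When a resource constraint changes mid-sequence, processing
# switches to the cheaper network, which is initialized from the first
# network's internal (hidden) state, loosely mirroring claims 30 and 50.
import numpy as np

class ToyRNN:
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
        self.Wh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
        self.h = np.zeros(hidden_dim)           # internal state

    def step(self, x):
        # One recurrent update over a single element of the input sequence.
        self.h = np.tanh(self.Wx @ x + self.Wh @ self.h)
        return self.h

def transfer_state(src, dst):
    # Hand the source network's internal state to the destination network,
    # truncating or zero-padding to fit the destination's hidden size.
    n = min(len(src.h), len(dst.h))
    dst.h[:] = 0.0
    dst.h[:n] = src.h[:n]

input_dim = 8
big_net = ToyRNN(input_dim, hidden_dim=64)      # higher computational load
small_net = ToyRNN(input_dim, hidden_dim=16)    # lower computational load

sequence = np.random.default_rng(1).normal(size=(100, input_dim))
constraint = "high_resources"                   # first indication
active = big_net

for t, x in enumerate(sequence):
    if t == 50:
        constraint = "low_resources"            # second indication arrives mid-sequence
    if constraint == "low_resources" and active is big_net:
        transfer_state(big_net, small_net)      # retrieve and reuse internal state (cf. claim 50)
        active = small_net                      # second portion handled by second network (cf. claim 30)
    active.step(x)
```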

Prosecution Timeline

Sep 13, 2022: Application Filed
Jul 17, 2025: Non-Final Rejection — §102, §103, §DP
Oct 09, 2025: Response Filed
Jan 21, 2026: Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585381: ADVANCED KEYBOARD BASED SEARCH
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585730: MANAGING MACHINE LEARNING MODELS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579479: COUNTERFACTUAL SAMPLES FOR MAINTAINING CONSISTENCY BETWEEN MACHINE LEARNING MODELS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12566998: SYSTEM, METHODS, AND PROCESSES FOR MODEL PERFORMANCE AGGREGATION
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12555037: MODEL MANAGEMENT DEVICE AND MODEL MANAGING METHOD
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 99% (+31.2%)
Median Time to Grant: 4y 0m
PTA Risk: Moderate
Based on 538 resolved cases by this examiner. Grant probability derived from career allow rate.
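A minimal sketch of the apparent arithmetic behind these projections, assuming the grant probability is simply the career allow rate and the with-interview figure adds the reported interview lift (the page does not disclose its actual model):

```python
# Assumption: grant probability = career allow rate; with-interview figure
# adds the reported +31.2 point lift, capped at 100%.
granted, resolved = 366, 538
career_allow_rate = granted / resolved                        # ~0.680 -> the 68% grant probability
interview_lift = 0.312                                        # +31.2 percentage points
with_interview = min(career_allow_rate + interview_lift, 1.0) # ~0.99 -> the 99% figure
print(f"{career_allow_rate:.0%} baseline, {with_interview:.0%} with interview")
```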
