DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is in response to applicant’s communication filed 23 December 2025, in response to the Office Action mailed 24 September 2025. The applicant’s remarks and any amendments to the claims or specification have been considered, with the results that follow.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 30-34, 36-44, and 46-51 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 30, 32-33, 37-40, 42-43, and 47-49 of copending Application No. 17/911362 (reference application) in view of Wang (US 2021/0158151).
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
As per claim 30, the claim is compared with claim 30 of Application No. 17/911362—where any differences between them have been highlighted (in bold)—as follows:
Instant Application
Application No. 17/911362
A method performed by a wireless transmit receive unit (WTRU)
A method performed by a wireless transmit/receive unit (WTRU)
the method comprising: receiving an input data sequence
the method comprising: receiving an input data sequence
receiving a first indication of a first constraint for processing a first portion of the input data sequence, wherein the first indication indicates a relationship between the first constraint and a neural network (NN) for processing the first portion of the input data sequence
receiving a first indication of a first constraint for processing a first portion of the input data sequence at a first time by a first neural network, wherein the first indication indicates a relationship between the first constraint and a characteristic of the first neural network for processing the first portion of the input data sequence
processing the first portion of the input data sequence at a first time utilizing the NN based on the first indication
processing the input data sequence utilizing one of the first neural network or the second neural network based on the first or second constraint
while continuing to receive the input data sequence, receiving a second indication of a second constraint for processing a second portion of the input data sequence, wherein the second indication indicates a relationship between the second constraint and the NN for processing the second portion of the input data sequence
while continuing to receive the input data sequence, receiving a second indication of a second constraint corresponding to a change in the first constraint for processing a second portion of the input data sequence at a second time by a second neural network, wherein the second indication indicates a relationship between the second constraint and a characteristic of the second neural network for processing the second portion of the input data sequence
adapting, based on the second indication, the NN to process the second portion of the input data sequence, wherein the NN is adapted to be modified according to one or more parameters of a function based on the second indication to process the second portion of the input data sequence
and processing the second of the input data sequence at a second time utilizing the adapted NN based on the second indication
processing the input data sequence utilizing one of the first neural network or the second neural network based on the first or second constraint
As illustrated above, claim 30 of Application ’362 recites all of the limitations set forth in claim 30 of the instant application, except for: adapting, based on the second indication, the NN to process the second portion of the input data sequence, wherein the NN is adapted to be modified according to one or more parameters of a function based on the second indication to process the second portion of the input data sequence.
Wang teaches adapting, based on the second indication, the NN to process the second portion of the input data sequence, wherein the NN is adapted to be modified according to one or more parameters of a function based on the second indication to process the second portion of the input data sequence [a controller(s) in the device(s) can receive ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (paras. 0054, 0141-143, 0148, etc.) where the controller can dynamically reassess changing conditions (e.g., in the operating environment or devices) and modify or update the ML configuration to improve an overall efficiency of how resources are utilized (para. 0152, etc.) which can include changing different layer parameters, connections, sizes, etc. (paras. 0025, 0046, etc.); where the modified/updated ML configuration is the second indication of the second constraint].
Application ‘362 and Wang are analogous art, as they are within the same field of endeavor, namely adapting neural networks being utilized to process input data streams based upon resource constraints.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify/update the neural network to process further input using configuration data based upon resource constraints, etc., as taught by Wang, in the system for determining which neural network to process further input data based upon resource constraints in the system claimed by Application ‘362.
Wang provides motivation as [adapting the neural network allows the system to adapt to changing conditions while improving resource utilization and accuracy (paras. 0024-26, 0051, etc.)].
As per claim 31, see claim 32 of Application No. 17/911362.
As per claim 32, see claim 33 of Application No. 17/911362.
As per claim 33, see claim 37 of Application No. 17/911362.
As per claim 34, see claim 37 of Application No. 17/911362.
As per claim 36, see claim 38 of Application No. 17/911362.
As per claim 37, Application ‘362/Wang teaches receiving, from a device other than the WTRU, a target computational cost value or an accuracy value, wherein the NN is adapted to achieve the target computational cost or the accuracy value [a controller(s) in the device(s) can receive wirelessly transmitted ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (Wang: paras. 0054, 0141-143, 0148, etc.), and can also include a desired improved accuracy (Wang: paras. 0051, 0055, 0075, 0087, etc.)].
As per claim 38, Application ‘362/Wang teaches receiving, from a device other than the WTRU, a command to increase or decrease the computational load of the NN by a defined amount; and adapting, based on the command, the NN to process a third portion of the input data sequence [a controller(s) in the device(s) can receive wirelessly transmitted ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (Wang: paras. 0054, 0141-143, 0148, etc.), can also include a desired improved accuracy (Wang: paras. 0051, 0055, 0075, 0087, etc.), and where the controller can dynamically reassess changing conditions (e.g., in the operating environment or devices) and modify or update the ML configuration (para. 0152, etc.)].
As per claim 39, see claim 39 of Application No. 17/911362.
As per claim 40, see the rejection of claim 30, above and claim 40 of Application No. 17/911362.
As per claim 41, see claim 47 of Application No. 17/911362.
As per claim 42, see claim 42 of Application No. 17/911362.
As per claim 43, see claim 43 of Application No. 17/911362.
As per claim 44, see claim 47 of Application No. 17/911362.
As per claim 46, see claim 48 of Application No. 17/911362.
As per claim 47, see the rejection of claim 37, above, wherein Application ‘362/Wang also teaches a transceiver, and wherein the processor is further configured to: receive, via the transceiver [communication of the configuration data between devices can occur via included transceivers (Wang: fig. 2; paras. 0036, 0040; etc.)].
As per claim 48, see the rejection of claim 38, above, wherein Application ‘362/Wang also teaches a transceiver, and wherein the processor is further configured to: receive, via the transceiver [communication of the configuration data between devices can occur via included transceivers (Wang: fig. 2; paras. 0036, 0040; etc.)].
As per claim 49, see claim 49 of Application No. 17/911362.
As per claim 50, Application ‘362/Wang teaches wherein the WTRU comprises a scheduler, and wherein the second indication of the second constraint for processing the second portion of the input data sequence is received from the scheduler [a controller(s) in the device(s) can receive ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (Wang: paras. 0054, 0141-143, 0148, etc.) where the controller can dynamically reassess changing conditions (e.g., in the operating environment or devices) and modify or update the ML configuration to improve an overall efficiency of how resources are utilized (Wang: para. 0152, etc.) which can include changing different layer parameters, connections, sizes, etc. (Wang: paras. 0025, 0046, etc.), and scheduling parameters (Wang: paras. 0041, 0103, etc.); where the modified/updated ML configuration data including scheduling parameters is the second indication of the second constraint for processing a second portion of the input data sequence, and including scheduling parameters means the controller is acting as a scheduler].
As per claim 51, see the rejection of claim 50, above.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 30-32, 36-40, 42, 43, and 46-51 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Wang (US 2021/0158151).
As per claim 30, Wang teaches a method performed by a wireless transmit receive unit (WTRU) [a wireless transmit/receive unit, which can include user equipment (UE) implementing machine learning (ML) architecture(s) (figs. 2, 5, etc.)], the method comprising: receiving an input data sequence [the wireless unit receives input data for the ML architecture (fig. 5, etc.)]; receiving a first indication of a first constraint for processing a first portion of the input data sequence, wherein the first indication indicates a relationship between the first constraint and a neural network (NN) for processing the first portion of the input data sequence [a controller(s) in the device(s) can receive ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (paras. 0054, 0141-143, 0148, etc.), for one or more types of neural network (para. 0038, etc.); where the configuration data is a first indication of a first constraint which indicates the relationship between the resource (e.g., processing, memory, power, etc.) constraint and the neural network for processing a first portion of the input data sequence]; processing the first portion of the input data sequence at a first time utilizing the NN based on the first indication [a controller(s) in the device(s) can receive ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (paras. 0054, 0141-143, 0148, etc.)]; while continuing to receive the input data sequence, receiving a second indication of a second constraint for processing a second portion of the input data sequence, wherein the second indication indicates a relationship between the second constraint and the NN for processing the second portion of the input data sequence [a controller(s) in the device(s) can receive ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (paras. 0054, 0141-143, 0148, etc.) where the controller can dynamically reassess changing conditions (e.g., in the operating environment or devices) and modify or update the ML configuration (para. 0152, etc.); where the modified/updated ML configuration is the second indication of the second constraint for processing a second portion of the input data sequence]; adapting, based on the second indication, the NN to process the second portion of the input data sequence, wherein the NN is adapted to be modified according to one or more parameters of a function based on the second indication to process the second portion of the input data sequence [a controller(s) in the device(s) can receive ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (paras. 0054, 0141-143, 0148, etc.) where the controller can dynamically reassess changing conditions (e.g., in the operating environment or devices) and modify or update the ML configuration to improve an overall efficiency of how resources are utilized (para. 0152, etc.) which can include changing different layer parameters, connections, sizes, etc. (paras. 0025, 0046, etc.); where the modified/updated ML configuration is the second indication of the second constraint]; and processing the second of the input data sequence at a second time utilizing the adapted NN based on the second indication [a controller(s) in the device(s) can receive ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (paras. 0054, 0141-143, 0148, etc.) where the controller can dynamically reassess changing conditions (e.g., in the operating environment or devices) and modify or update the ML configuration (para. 0152, etc.), where the neural network can process streaming multimedia data (paras. 0001, 0033, etc.); and where the modified/updated ML configuration processes the continuing input data sequence].
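For illustration only, and not as a characterization of Wang's actual implementation, the adaptation pattern mapped above can be sketched as a controller that reconfigures a model whenever a new constraint indication arrives while the input sequence continues to stream. All names in this sketch (e.g., adapt_model, process_stream, the layer-size budget heuristic) are the examiner's hypothetical simplifications and appear in neither Wang nor the claims:

```python
# Examiner's illustrative sketch only; not Wang's disclosed implementation.
# The "model" is abstracted to a list of layer sizes, and the "constraint
# indication" to a single memory/compute budget per portion of input.

def adapt_model(layer_sizes, budget):
    """Shrink each layer proportionally so the model fits the indicated budget.

    This stands in for 'adapting the NN according to one or more parameters
    of a function based on the second indication'.
    """
    total = sum(layer_sizes)
    if total <= budget:
        return list(layer_sizes)          # first indication: no change needed
    scale = budget / total
    return [max(1, int(n * scale)) for n in layer_sizes]

def process_stream(portions, indications):
    """Process each portion of an input sequence under the most recent
    constraint indication, adapting the model when a new one arrives."""
    layers = [64, 64, 32]                 # hypothetical initial configuration
    results = []
    for portion, budget in zip(portions, indications):
        layers = adapt_model(layers, budget)
        # Placeholder for NN inference over this portion of the sequence;
        # sum(layers) stands in for the adapted model's computational load.
        results.append((portion, sum(layers)))
    return results
```

Under this sketch, a first indication with a loose budget leaves the model unchanged, while a second indication with a tighter budget (e.g., 80) yields a proportionally reduced configuration that then processes the second portion.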
As per claim 31, Wang teaches wherein the first constraint comprises at least one of a computational resource availability or a data processing accuracy [a controller(s) in the device(s) can receive ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (paras. 0054, 0141-143, 0148, etc.), and can also include a desired improved accuracy (paras. 0051, 0055, 0075, 0087, etc.)].
As per claim 32, Wang teaches wherein the NN has a computational load, wherein the computational load is greater before being adapted than after being adapted, and wherein the first indication indicates a greater computational resource availability than the second indication [a controller(s) in the device(s) can receive ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (paras. 0054, 0141-143, 0148, etc.) where the controller can dynamically reassess changing conditions (e.g., in the operating environment or devices) and modify or update the ML configuration to improve an overall efficiency of how resources are utilized based upon reduced availability (para. 0152, 0172, etc.)].
As per claim 36, Wang teaches wherein the NN is adapted to enable processing of the second portion of the input data sequence with a lower computational load, and wherein the NN is configured to minimize a loss in accuracy after the adaptation [a controller(s) in the device(s) can receive ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (paras. 0054, 0141-143, 0148, etc.) where the controller can dynamically reassess changing conditions (e.g., in the operating environment or devices) and modify or update the ML configuration to improve an overall efficiency of how resources are utilized based upon reduced availability (para. 0152, 0172, etc.)].
As per claim 37, Wang teaches receiving, from a device other than the WTRU, a target computational cost value or an accuracy value, wherein the NN is adapted to achieve the target computational cost or the accuracy value [a controller(s) in the device(s) can receive wirelessly transmitted ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (paras. 0054, 0141-143, 0148, etc.), and can also include a desired improved accuracy (paras. 0051, 0055, 0075, 0087, etc.)].
As per claim 38, Wang teaches receiving, from a device other than the WTRU, a command to increase or decrease the computational load of the NN by a defined amount; and adapting, based on the command, the NN to process a third portion of the input data sequence [a controller(s) in the device(s) can receive wirelessly transmitted ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (paras. 0054, 0141-143, 0148, etc.), can also include a desired improved accuracy (paras. 0051, 0055, 0075, 0087, etc.), and where the controller can dynamically reassess changing conditions (e.g., in the operating environment or devices) and modify or update the ML configuration (para. 0152, etc.)].
As per claim 39, Wang teaches wherein the input data sequence comprises video data or audio data, and wherein the processing is performed using an encoder or a decoder on the WTRU [the wireless device, including neural network, can process streaming multimedia data, including video and audio (paras. 0001, 0033, 0138, etc.) via included encoder and decoder (paras. 0044, 0065, etc.)].
As per claim 40, see the rejection of claim 30, above, wherein Wang also teaches a wireless transmit receive unit (WTRU) comprising a processor, the processor configured to: [perform the method] [the devices can include a processor executing instructions/code from computer-readable storage media (fig. 2, etc.)].
As per claim 42, see the rejection of claim 31, above.
As per claim 43, see the rejection of claim 32, above.
As per claim 46, see the rejection of claim 36, above.
As per claim 47, see the rejection of claim 37, above, wherein Wang also teaches a transceiver, and wherein the processor is further configured to: receive, via the transceiver [communication of the configuration data between devices can occur via included transceivers (fig. 2; paras. 0036, 0040; etc.)].
As per claim 48, see the rejection of claim 38, above, wherein Wang also teaches a transceiver, and wherein the processor is further configured to: receive, via the transceiver [communication of the configuration data between devices can occur via included transceivers (fig. 2; paras. 0036, 0040; etc.)].
As per claim 49, see the rejection of claim 39, above.
As per claim 50, Wang teaches wherein the WTRU comprises a scheduler, and wherein the second indication of the second constraint for processing the second portion of the input data sequence is received from the scheduler [a controller(s) in the device(s) can receive ML configuration data to configure a ML model for processing communication inputs (input data sequence), where the configuration data can include ML capabilities, processing power availability, memory constraints, power budget, etc. (paras. 0054, 0141-143, 0148, etc.) where the controller can dynamically reassess changing conditions (e.g., in the operating environment or devices) and modify or update the ML configuration to improve an overall efficiency of how resources are utilized (para. 0152, etc.) which can include changing different layer parameters, connections, sizes, etc. (paras. 0025, 0046, etc.), and scheduling parameters (paras. 0041, 0103, etc.); where the modified/updated ML configuration data including scheduling parameters is the second indication of the second constraint for processing a second portion of the input data sequence, and including scheduling parameters means the controller is acting as a scheduler].
As per claim 51, see the rejection of claim 50, above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 33, 34, 41, and 44 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 2021/0158151) in view of Campos et al. (Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks, Feb 2018, pgs. 1-17 – an earlier version of which is cited in an IDS).
As per claim 33, Wang teaches the method of claim 32, as described above.
While Wang teaches adapting the NN for a second portion of an input data sequence (see above), it has not been relied upon for teaching wherein the adaptation of the NN causes the NN to skip more of the second portion of the input data sequence than the first portion of the input data sequence that was processed by the NN based on the first indication.
Campos teaches wherein the adaptation of the NN causes the NN to skip more of the second portion of the input data sequence than the first portion of the input data sequence that was processed by the NN based on the first indication [a skip RNN skips input samples and associated state updates based upon a budget constraint (pgs. 1-2, abstract and sections 1-2; pg. 5, section 3.2; etc.)].
Wang and Campos are analogous art, as they are within the same field of endeavor, namely dynamically adapting a NN as it processes an input stream/sequence.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include skipping input samples by the NN based on a constraint, as taught by Campos, to the adaptation of the NN processing the samples based on received constraints in the system taught by Wang.
Campos provides motivation as [the skip RNN model reduces required processing while improving performance of the model (pg. 1, abstract; etc.)].
As per claim 34, Wang teaches the method of claim 32, as described above.
While Wang teaches adapting the NN for a second portion of an input data sequence, including adapting the NN to have a lower computational load when processing the second portion (see above), it has not been relied upon for teaching wherein the NN comprises a skip recurrent NN (RNN) model, wherein the skip RNN model has a lower computational load when processing the second portion of the input data sequence than when processing the first portion of the input data sequence.
Campos teaches wherein the NN comprises a skip recurrent NN (RNN) model, wherein the skip RNN model has a lower computational load when processing the second portion of the input data sequence than when processing the first portion of the input data sequence [a skip RNN skips input samples and associated state updates based upon a budget constraint, to reduce the processing load of the model while improving performance (pgs. 1-2, abstract and sections 1-2; pg. 5, section 3.2; etc.)].
Wang and Campos are analogous art, as they are within the same field of endeavor, namely dynamically adapting a NN as it processes an input stream/sequence.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include skipping input samples by the NN based on a constraint, as taught by Campos, to the adaptation of the NN processing the samples based on received constraints in the system taught by Wang.
Campos provides motivation as [the skip RNN model reduces required processing while improving performance of the model (pg. 1, abstract; etc.)].
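For illustration only, the skip mechanism relied upon from Campos can be sketched as an RNN whose state update is gated so that some input samples are skipped (the state is copied forward unchanged). Campos et al.'s model learns the gate under a budget penalty; the fixed-period gate below is a deliberate simplification by the examiner, and all names in it are hypothetical:

```python
# Examiner's illustrative sketch only; Campos's skip RNN learns its gate,
# whereas this sketch uses a fixed skip period as a stand-in.

def run_skip_rnn(sequence, skip_period):
    """Run a toy recurrent update, consuming only every skip_period-th sample.

    A larger skip_period models a tighter budget constraint: more of the
    input portion is skipped, so fewer state updates (less computation).
    """
    state = 0.0
    updates = 0
    for t, x in enumerate(sequence):
        if t % skip_period == 0:
            state = 0.5 * state + x   # gate open: update state from the sample
            updates += 1
        # gate closed: sample skipped, state copied forward unchanged
    return state, updates
```

Under this sketch, processing a first portion with skip_period=1 updates on every sample, while a tighter second constraint (e.g., skip_period=2) halves the number of state updates over a portion of the same length, mirroring the claimed "skip more of the second portion" behavior.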
As per claim 41, see the rejection of claim 33, above.
As per claim 44, see the rejection of claim 34, above.
Response to Arguments
Applicant's arguments filed 23 December 2025 have been fully considered but they are not persuasive.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., that the NN is adapted to process a portion of the same input data sequence without sending any modification communication to another device) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). In this case, the claim recites “while continuing to receive the input data sequence, receiving a second indication of a second constraint for processing a second portion of the input data sequence, wherein the second indication indicates a relationship between the second constraint and the NN for processing the second portion of the input data sequence” (see, e.g., claim 30). Additionally, multiple dependent claims indicate that the WTRU is “receiving, from a device other than the WTRU,” commands, cost/accuracy values, etc. (see, e.g., claims 37-38).
Conclusion
The following is a summary of the treatment and status of all claims in the application as recommended by M.P.E.P. 707.07(i): claims 1-29, 35, and 45 are canceled; claims 30-34, 36-44, and 46-51 are rejected.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lin (US 2016/0328644) – discloses an adaptive neural network selector/configuration including adapting the NN configuration based on resource or environment changes (e.g., changes in applications or devices) to meet desired latency requirements, etc.
Thakker (US 2021/0056422) – discloses a skip RNN including a skip predictor.
Srivastava et al. (Highway Networks, Nov 2015, pgs. 1-6) – discloses a neural network architecture that includes skipping layers/nodes during training via gating units which learn when to skip.
Zhang et al. (Trainable Dynamic Subsampling for End-to-End Speech Recognition, Sept 2019, pgs. 1413-1417) – discloses a skip RNN architecture including a skip predictor based upon different layers of the RNN, as well as dynamic subsampling to skip nodes/layers.
Tao et al. (Skipping RNN State Updates without Retraining the Original Model, Nov 2019, pgs. 31-36 – cited in an IDS) – discloses a skip RNN including training the skip predictor (independently of the RNN).
Cui et al. (Spatial Deep Learning for Wireless Scheduling, June 2019, pgs. 1248-1261) – discloses a system/method utilizing a neural network to schedule links in a wireless network.
The examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.
When responding to this office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEORGE GIROUX whose telephone number is (571)272-9769. The examiner can normally be reached M-F 10am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GEORGE GIROUX/Primary Examiner, Art Unit 2128