Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 21-40 are pending per the amendment of 10/21/2025. Claims 28-33 are withdrawn from consideration as being directed to a non-elected embodiment. Applicant has elected Group (I), namely claims 21-27 and 34-40, with traverse.
The arguments presented in the Remarks have been carefully considered but are not persuasive.
Applicant essentially argues that there is no serious burden of search for Groups (I) and (II) because both the transmitter end and the receiver end employ a neural network structure in which the encoder network and the decoder network share at least one part. Applicant therefore alleges that searching for such a neural network structure in both (I) and (II) does not constitute a burden of search.
The examiner respectfully disagrees. A shared common feature does not necessarily render two differently structured devices a single invention.
Even though both (I) and (II) use a neural network structure in which the encoder network and the decoder network share at least one part, the processing at each end is not the same when that structure is integrated into an encoder's function and a decoder's function, which differ in both operation and effect. An encoding neural network produces a signal that is very different from that of a decoding neural network (i.e., an encoded signal versus a decoded signal). The shared neural network structure is used differently to achieve different results, creating a divergence in mode of operation, design, and effect. This divergence demonstrates a burden of search, because the search must account for the complications of using the same tool differently to achieve opposite results at the encoder and at the decoder.
Furthermore, (I) and (II) can each function without the specific particulars of the other; they are distinct even though they are intended to work together. For example, the decoder in (II) can operate on received signals transmitted by any transmitter that encodes input onto a channel, and the encoder in (I) does not depend on who or what is at the receiving end of the transmitted signal.
Furthermore, an autoencoder in which the encoder reuses at least a part of the decoder neural network is used in other applications, i.e., it is not exclusive to a transceiver engaged in communication, which compounds the burden of search. It is common for an encoder/decoder pair to be symmetrical, i.e., to have the same numbers of neurons and layers.
In view of the foregoing, the arguments are not persuasive.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 21-27 and 34-40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by O’Shea et al. (US 2018/036192).
As to claim 21:
O’Shea discloses:
A method, applied to a transmit end, the method comprising:
obtaining, by the transmit end, a first data stream; processing, by the transmit end, the first data stream using an encoding neural network, to obtain a first symbol stream, (Fig. 1, ¶0047, transmitter 102 receives input information 108, i.e., the first data stream, and processes it into a series of signals. See also ¶0050-0051, 0122.)
wherein the encoding neural network reuses a part or all of a neural network that is the same as a decoding neural network that corresponds to the first data stream; (See at least ¶0073: the encoder and decoder may share the same layer or collection of layers per Fig. 2, such that "Parameters and weight values in the network may be used for a single multiplication, as in a fully connected neural network (DNN), or they may be "tied" or replicated across multiple locations within the network." See also ¶0123, 0161: the encoder and decoder also share functions, as well as training algorithm results, by virtue of being trained together. A conceptual sketch of such layer sharing is provided after this claim mapping for illustration.)
and outputting, by the transmit end, the first symbol stream. (See at least ¶0047, 0122, where the transmitter outputs the processed input signals for transmission.)
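For illustration only, the following is a minimal sketch, not drawn from O’Shea, of an encoding network and a decoding network that reuse ("tie") a common layer, consistent with the shared-layer arrangement cited above. All names, layer sizes, and the framework choice (PyTorch) are assumptions made for demonstration purposes.

    # Minimal sketch (not from O'Shea): encoder and decoder networks that
    # reuse the same layer instance, so its parameters are shared ("tied").
    import torch
    import torch.nn as nn

    shared = nn.Linear(16, 16)            # layer reused by both networks

    encoder = nn.Sequential(              # transmit-end encoding network
        nn.Linear(8, 16), nn.ReLU(),
        shared,                           # tied/shared parameters
        nn.Linear(16, 4),                 # maps to the symbol stream
    )

    decoder = nn.Sequential(              # receive-end decoding network
        nn.Linear(4, 16), nn.ReLU(),
        shared,                           # the very same parameter tensors
        nn.Linear(16, 8),                 # reconstructs the data stream
    )

    data = torch.randn(1, 8)              # stand-in for the first data stream
    symbols = encoder(data)               # first symbol stream
    recovered = decoder(symbols)          # decoder output at the receive end

Because the shared layer appears in both networks, any update to its weights affects the encoder and the decoder simultaneously, which mirrors the "tied" parameter description quoted above.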
As to claim 22:
O’Shea discloses all limitations of claim 21, further comprising: after outputting, by the transmit end, the first symbol stream, receiving, by the transmit end, a first weight, wherein the first weight is from the decoding neural network, and the first weight is used to train the encoding neural network. (¶0106, 0109-0110, 0113, the transmission and eventual reconstruction of the input information may be part of a closed-loop training process, wherein a loss function is determined from the decoder network’s output and provides a weight update to the encoder network, i.e., training in an iterative manner.)
As to claim 23:
O’Shea discloses all limitations of claim 21, further comprising: after outputting, by the transmit end, the first symbol stream, receiving, by the transmit end, a first gradient, wherein the first gradient is from the decoding neural network, and the first gradient is used to train the encoding neural network. (¶0106, 0109-0110, 0113, in a manner similar to claim 22, feedback is determined from the decoder network’s output and provides a gradient of the objective function, which is used to further update the encoder during training by calculating the rate of change used to select variations of the encoder’s parameter(s).)
As to claim 24:
O’Shea discloses all limitations of claim 21, further comprising:
after outputting, by the transmit end, the first symbol stream, receiving, by the transmit end, a second function, wherein the second function is from the decoding neural network, and the second function is a loss function or a reward function; and processing, by the transmit end, the second function using the encoding neural network, to obtain a second gradient, wherein the second gradient is used to train the encoding neural network. (¶0106, 0109-0113, the transmission and eventual reconstruction of the input information may be part of a closed-loop training process, wherein a loss/objective function is determined from the decoder network’s output and is used to derive a gradient-based update by calculating a gradient of the objective function. The gradient is then used to update the encoder, for example by varying its weights.)
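For illustration only, the following minimal sketch, not drawn from O’Shea, shows one closed-loop training step of the kind discussed for claims 22-24: a loss is computed from the decoder network's output, a gradient of that loss is derived, and the gradient is used to update the encoder network's weights. The differentiable noise "channel" and all names and sizes are assumptions for demonstration purposes.

    # Minimal sketch (not from O'Shea): a loss computed from the decoder output
    # yields gradients that update the encoder (and decoder) parameters.
    import torch
    import torch.nn as nn

    encoder = nn.Linear(8, 4)                       # transmit-end encoding network
    decoder = nn.Linear(4, 8)                       # receive-end decoding network
    optimizer = torch.optim.SGD(
        list(encoder.parameters()) + list(decoder.parameters()), lr=0.01)

    data = torch.randn(32, 8)                       # first data stream (a batch)
    symbols = encoder(data)                         # first symbol stream
    received = symbols + 0.1 * torch.randn_like(symbols)   # simulated noisy channel
    reconstruction = decoder(received)              # decoder output at the receive end

    loss = nn.functional.mse_loss(reconstruction, data)    # loss from decoder output
    loss.backward()                                 # gradients flow back to the encoder
    optimizer.step()                                # weights updated from the gradient
    optimizer.zero_grad()

In an over-the-air system the loss or gradient would be fed back to the transmit end over a feedback channel rather than through automatic differentiation; the sketch only illustrates the direction of the update.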
As to claim 25:
O’Shea discloses all limitations of claim 21, wherein outputting, by the transmit end, the first symbol stream comprises: performing, by the transmit end, filtering processing on the first symbol stream, to obtain a first waveform signal, wherein an out-of-band signal is filtered out from the first waveform signal; and outputting, by the transmit end, the first waveform signal. (Fig. 1, ¶0047, transmitter 102 receives input information 108, i.e., the first data stream, and processes it into a series of signals. See also ¶0134-0135, 0122, a filter is included in the processing chain to filter out unwanted signal components before transmission. Whether low-pass (which is part of a transmitter) or high-pass, the filter removes the low/high-frequency signal portions before transmission.)
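For illustration only, the following minimal sketch, not drawn from O’Shea, low-pass filters a symbol stream so that out-of-band components are attenuated before transmission. The sample rate, cutoff frequency, filter length, and names are assumptions for demonstration purposes.

    # Minimal sketch (not from O'Shea): FIR low-pass filtering of a symbol
    # stream to suppress out-of-band energy prior to transmission.
    import numpy as np
    from scipy.signal import firwin, lfilter

    fs = 1_000_000                       # assumed sample rate, in Hz
    cutoff = 200_000                     # assumed pass-band edge, in Hz
    taps = firwin(numtaps=64, cutoff=cutoff, fs=fs)   # low-pass FIR design

    symbol_stream = np.random.randn(4096)             # stand-in first symbol stream
    waveform = lfilter(taps, 1.0, symbol_stream)      # first waveform signal with
                                                      # out-of-band components filtered out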
As to claim 26:
O’Shea discloses all limitations of claim 21, wherein processing, by the transmit end, the first data stream using the encoding neural network, to obtain the first symbol stream comprises: performing, by the transmit end, encoding processing on the first data stream, to obtain a first channel encoding code word; and processing, by the transmit end, the first channel encoding code word using the encoding neural network, to obtain the first symbol stream. (See ¶0049, 0063, 0037, 0122-0123, the transmitter includes an encoding network which, naturally, performs encoding of the input data stream as part of the transmission process over the channel(s). ¶0129, during training the system determines an error rate for codewords in order to update the corresponding parameters, thus involving determination of a codeword.)
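For illustration only, the following minimal sketch, not drawn from O’Shea, first channel-encodes a data stream into a codeword (here a simple rate-1/3 repetition code, chosen only for brevity) and then maps that codeword to a symbol stream with an encoding network. All names and sizes are assumptions for demonstration purposes.

    # Minimal sketch (not from O'Shea): channel encoding followed by the
    # encoding neural network producing the symbol stream.
    import torch
    import torch.nn as nn

    data_bits = torch.randint(0, 2, (1, 8)).float()    # first data stream (bits)
    codeword = data_bits.repeat_interleave(3, dim=1)   # rate-1/3 repetition code,
                                                       # i.e., first channel encoding codeword
    encoding_net = nn.Linear(24, 12)                   # encoding neural network
    symbols = encoding_net(codeword)                   # first symbol stream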
As to claim 27:
O’Shea discloses all limitations of claim 21, wherein the encoding neural network reusing a part or all of the neural network that is the same as the decoding neural network that corresponds to the first data stream comprises:
reusing, by the encoding neural network, a part or all of a model of the decoding neural network that corresponds to the first data stream, a loss function of the decoding neural network that corresponds to the first data stream, a reward function of the decoding neural network that corresponds to the first data stream, or a parameter of the decoding neural network that corresponds to the first data stream.
(See at least ¶0073: the encoder and decoder may share the same layer or collection of layers per Fig. 2, such that "Parameters and weight values in the network may be used for a single multiplication, as in a fully connected neural network (DNN), or they may be "tied" or replicated across multiple locations within the network." See also ¶0123, 0161: the encoder and decoder also share functions, as well as training algorithm results, by virtue of being trained together. ¶0137, 0138, a loss function is determined and shared to provide updates to both encoder and decoder.)
As to claim 34:
O’Shea discloses:
A communication apparatus, comprising:
a processor; and a transceiver connected to the processor; wherein the processor is configured to execute program code stored in a memory, (See ¶0009, 0179, a communication device with memory and processor, with transceivers coupled thereto) and when the program code is executed, the apparatus is enabled to: obtain a first data stream; process the first data stream using an encoding neural network, to obtain a first symbol stream, (Fig. 1, ¶0047, transmitter 102 receives input information 108, i.e., the first data stream, and processes it into a series of signals. See also ¶0050-0051, 0122.)
wherein the encoding neural network reuses a part or all of a neural network that is the same as a decoding neural network that corresponds to the first data stream; (See at least ¶0073: the encoder and decoder may share the same layer or collection of layers per Fig. 2, such that "Parameters and weight values in the network may be used for a single multiplication, as in a fully connected neural network (DNN), or they may be "tied" or replicated across multiple locations within the network." See also ¶0123, 0161: the encoder and decoder also share functions, as well as training algorithm results, by virtue of being trained together.)
and output the first symbol stream. (See at least ¶0047, 0122, where the transmitter outputs the processed input signals for transmission.)
As to claim 35:
O’Shea discloses all limitations of claim 34, wherein when the program code is executed, the apparatus is further enabled to: receive a first weight, wherein the first weight is from the decoding neural network, and the first weight is used to train the encoding neural network. (¶0106, 0109-0110, 0113, the transmission and eventual reconstruction of the input information may be part of a closed-loop training process, wherein a loss function is determined from the decoder network’s output and provides a weight update to the encoder network, i.e., training in an iterative manner.)
As to claim 36:
O’Shea discloses all limitations of claim 34, wherein when the program code is executed, the apparatus is further enabled to: receive a first gradient, wherein the first gradient is from the decoding neural network, and the first gradient is used to train the encoding neural network. (¶0106, 0109-0110, 0113, in a manner similar to claim 22, feedback is determined from the decoder network’s output and provides a gradient of the objective function, which is used to further update the encoder during training by calculating the rate of change used to select variations of the encoder’s parameter(s).)
As to claim 37:
O’Shea discloses all limitations of claim 34, wherein when the program code is executed, the apparatus is further enabled to:
receive a second function, wherein the second function is from the decoding neural network, and the second function is a loss function or a reward function; and process the second function using the encoding neural network, to obtain a second gradient, wherein the second gradient is used to train the encoding neural network. (¶0106, 0109-0113, the transmission and eventual reconstruction of the input information may be part of a closed-loop training process, wherein a loss/objective function is determined from the decoder network’s output and is used to derive a gradient-based update by calculating a gradient of the objective function. The gradient is then used to update the encoder, for example by varying its weights.)
As to claim 38:
O’Shea discloses all limitations of claim 34, wherein when the program code is executed, the apparatus is enabled to:
perform filtering processing on the first symbol stream, to obtain a first waveform signal, wherein an out-of-band signal is filtered out from the first waveform signal; and output the first waveform signal. (Fig. 1, ¶0047, transmitter 102 receives input information 108, i.e., the first data stream, and processes it into a series of signals. See also ¶0134-0135, 0122, a filter is included in the processing chain to filter out unwanted signal components before transmission. Whether low-pass (which is part of a transmitter) or high-pass, the filter removes the low/high-frequency signal portions before transmission.)
As to claim 39:
O’Shea discloses all limitations of claim 34, wherein when the program code is executed, the apparatus is enabled to: perform encoding processing on the first data stream, to obtain a first channel encoding code word; and process the first channel encoding code word using the encoding neural network, to obtain the first symbol stream. (See ¶0049, 0063, 0037, 0122-0123, the transmitter includes an encoding network which, naturally, performs encoding of the input data stream as part of the transmission process over the channel(s). ¶0129, during training the system determines an error rate for codewords in order to update the corresponding parameters, thus involving determination of a codeword.)
As to claim 40:
O’Shea discloses all limitations of claim 34, wherein when the program code is executed, the apparatus is enabled to: reuse a part or all of a model of the decoding neural network, a loss function of the decoding neural network, a reward function of the decoding neural network, or a parameter of the decoding neural network.
(See at least ¶0073: the encoder and decoder may share the same layer or collection of layers per Fig. 2, such that "Parameters and weight values in the network may be used for a single multiplication, as in a fully connected neural network (DNN), or they may be "tied" or replicated across multiple locations within the network." See also ¶0123, 0161: the encoder and decoder also share functions, as well as training algorithm results, by virtue of being trained together. ¶0137, 0138, a loss function is determined and shared to provide updates to both encoder and decoder.)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Park et al. (US 2020/0034948) - a U-Net based pix architecture for transforming LR MR images to sCT images. The input is encoded sequentially as a feature map of reduced spatial dimension and increased depth as it travels through the encoder layers on the left side of the network. The process is reversed as the decoder layers recover spatial information and reconstruct the output CT image. Skip connections between corresponding encoder/decoder layers (shown as grey lines in the reference's figure) allow shared structural features to move across the network efficiently.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUAN M HUA whose telephone number is (571)270-7232. The examiner can normally be reached 10:30-6:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anthony Addy, can be reached at 571-272-7795. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/QUAN M HUA/Primary Examiner, Art Unit 2645