DETAILED ACTION

This action is in response to the claims filed 09/15/2023 for Application number 18/468,203. Claims 1-30 are currently pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 06/21/2019 and 02/02/2022 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that use the word “means” and are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use “means for” performing the claimed function coupled with functional language, without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

means for partitioning … in claim 21
means for convolving … in claim 21
means for concatenating … in claim 21
means for taking … in claim 21
means for concatenating … in claim 25
means for discarding in claim 26
means for discarding in claim 27
means for partitioning in claim 28
means for convolving in claim 29
means for concatenating/convolving in claim 30

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-30 are rejected under 35 U.S.C. 103 as being unpatentable over Hill et al. ("US 20200394500 A1", hereinafter "Hill") in view of Buckler et al. ("US 20200410352 A1", hereinafter "Buckler") and further in view of Zhou et al. ("US 20230385642 A1", hereinafter "Zhou").

Regarding claim 1, Hill teaches A processing system, comprising: at least one memory having executable instructions stored thereon (¶0023-¶0024; FIG. 1); and one or more processors communicatively coupled with the at least one memory and configured to execute the executable instructions in order to cause the processing system to (¶0023-¶0024; FIG. 1): partition a first input into a first set of channels and a second set of channels (“If the input is first decomposed into multiple channels, such as the red, green, and blue channels of a color image, then the convolutional network trained on that input may be considered three-dimensional, with two spatial dimensions along the axes of the image and a third dimension capturing color information.” [¶0042]); convolve, at a first layer of a neural network, the first set of channels and the second set of channels into a first output having a smaller dimensionality than a dimensionality of the first input (“The outputs of the convolutional connections may be considered to form a feature map in the subsequent layer, with each element of the feature map (e.g., 220) receiving input from a range of neurons in the previous layer (e.g., feature maps 218) and from each of the multiple channels … Values from adjacent neurons may be further pooled, which corresponds to down sampling, and may provide additional local invariance and dimensionality reduction.” [¶0042]); concatenate the first set of channels and the first output into a second input for a second layer of the neural network (“In a fully connected neural network segment 202, a neuron in a first layer may communicate its output to every neuron in a second layer, so that each neuron in the second layer will receive input from every neuron in the first layer. FIG. 2B illustrates an example of a locally connected neural network segment 204. In a locally connected neural network segment 204, a neuron in a first layer may be connected to a limited number of neurons in the second layer.” [¶0031]); convolve the second input into a second output via the second layer of the neural network (“a second convolutional layer configured to use a second-layer kernel for convolving the second-layer input tensor to generate a third-layer input tensor,” [¶0011]); and take one or more actions based on at least one of the first output and the second output (“The output of the deep convolutional network 350 is a classification 366 for the input data 352. The classification 366 may be a set of probabilities, where each probability is the probability of the input data including a feature from a set of classification features.” [¶0048; classification corresponds to an action.]).

However, Hill fails to explicitly teach wherein: the second output merges a first receptive field generated by the first layer of the neural network with a second receptive field generated by the second layer of the neural network, and the first receptive field covers a larger receptive field in the first input than the second receptive field.

Buckler teaches the second output merges a first receptive field generated by the first layer of the neural network with a second receptive field generated by the second layer of the neural network (“The second receptive field 520 is offset from the first receptive field 510 such that the second receptive field 520 shares a set of pixels with the first receptive field 510 and the second receptive field 520 further includes an additional set of pixels” [¶0076]).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hill’s teachings by merging the first and second receptive fields as taught by Buckler. One would have been motivated to make this modification in order to reduce the amount of CNN computations, thereby reducing power consumption and increasing computation speeds. [¶0050; Buckler]

However, Hill/Buckler fails to explicitly teach the first receptive field covers a larger receptive field in the first input than the second receptive field.

Zhou teaches the first receptive field covers a larger receptive field in the first input than the second receptive field (“It is necessary to ensure that a receptive field of the convolutional layer equivalent to the linear operation is less than or equal to a receptive field of the first convolutional layer.” [¶0015; a receptive field of the convolutional layer that is less than or equal to a receptive field of the first convolutional layer would imply the first receptive field covers a larger receptive field than the second receptive field.])

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hill’s/Buckler’s teachings in order to keep the second receptive field size smaller than the first receptive field as taught by Zhou. One would have been motivated to make this modification in order to avoid reducing the speed of an inference phase or increasing the resource consumption of the inference phase. [¶0015, Zhou]
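For illustration only, the claim 1 pipeline mapped above can be sketched as follows. This is a minimal sketch assuming hypothetical tensor shapes, channel splits, and kernel sizes; it is not asserted to be the applicant's claimed implementation or Hill's disclosed architecture.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 8, 32, 32)                    # first input: 8 channels, 32x32
first_set, second_set = x.split([4, 4], dim=1)   # partition into two channel sets

# First layer: convolve both channel sets into a first output whose spatial
# dimensionality is smaller than the first input (stride-2 down-sampling).
layer1 = nn.Conv2d(8, 4, kernel_size=3, stride=2, padding=1)
first_output = layer1(torch.cat([first_set, second_set], dim=1))  # (1, 4, 16, 16)

# Concatenate the first set of channels with the first output to form the
# second input; the first set is average-pooled here solely so its spatial
# size matches the first output for channel-wise concatenation.
first_set_ds = nn.functional.avg_pool2d(first_set, kernel_size=2)
second_input = torch.cat([first_set_ds, first_output], dim=1)     # (1, 8, 16, 16)

# Second layer: convolve the second input into a second output.
layer2 = nn.Conv2d(8, 4, kernel_size=3, padding=1)
second_output = layer2(second_input)                              # (1, 4, 16, 16)

# "Take one or more actions" based on the outputs, e.g. a classification.
logits = second_output.mean(dim=(2, 3))   # global average pool -> (1, 4)
action = logits.argmax(dim=1)             # most probable of 4 classes
```

In this sketch the stride-2 first layer supplies the reduced dimensionality recited in the convolving limitation, and torch.cat performs the recited concatenation along the channel axis.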
Regarding claim 2, Hill/Buckler/Zhou teaches The processing system of Claim 1, and Hill teaches wherein the first set of channels and the second set of channels comprise equal-sized contiguous portions of the first input (“While in conventional convolutional processing, the input tensor is processed using receptive fields having the same dimensions as the kernel, the DCN 501 uses stretched receptive fields having a length x(k) and a width y(i). In other words, the stretched receptive fields of a layer of the DCN 501 have the same length as the corresponding kernels and the same width as the corresponding input tensor.” [¶0066])

Regarding claim 3, Hill/Buckler/Zhou teaches The processing system of Claim 1, and Hill teaches wherein the first output has a size corresponding to a size of the first set of channels or a size of the second set of channels (“Note that the convolution of the layer 503 may involve the use of padding (not shown) to generate an output for the layer 504 having the same dimensions as the input of layer 503 (rather than the smaller output that would result if no padding were used).” [¶0070])
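The size relationships relied on for claims 3 and 4, and the compounding of receptive fields underlying the claim 1 wherein clause, follow standard convolution arithmetic. A minimal sketch with hypothetical layer parameters (the numbers are not taken from the references):

```python
def conv_out_size(in_size, kernel, stride=1, pad=0):
    # Spatial output size: floor((in + 2*pad - kernel) / stride) + 1
    return (in_size + 2 * pad - kernel) // stride + 1

def receptive_field(layers):
    # layers: list of (kernel, stride) pairs; returns how many elements of
    # the first input one output element sees after all layers are applied.
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump   # widen by the kernel, scaled by the
        jump *= stride              # cumulative stride ("jump") so far
    return rf

# With "same" padding (pad=1 for a 3x3 kernel, stride 1), the output keeps
# the input's size, as with Hill's padded convolution at layer 503 (¶0070):
assert conv_out_size(32, kernel=3, stride=1, pad=1) == 32

# Without padding, the output shrinks:
assert conv_out_size(32, kernel=3, stride=1, pad=0) == 30

# Receptive fields compound across layers: a 3-tap stride-2 layer covers 3
# input elements on its own, and a following 3-tap layer widens that to 7.
print(receptive_field([(3, 2)]))          # -> 3
print(receptive_field([(3, 2), (3, 1)]))  # -> 7
```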
Regarding claim 4, Hill/Buckler/Zhou teaches The processing system of Claim 1, and Hill teaches wherein the second output has a size corresponding to a size of the first set of channels or a size of the second set of channels (“Note that using stride lengths greater than 1, having multiple channels in a layer (e.g., using multiple filters), or processing multi-dimensional tensors will change the corresponding number of elements that will need to be stored in the operational memory, however the overall methodology—performing convolutional and other operations in subsequent layers while still processing the current layer—will remain substantially the same.” [¶0063])

Regarding claim 5, Hill/Buckler/Zhou teaches The processing system of Claim 1, and Hill teaches wherein in order to concatenate the first set of channels and the first output into the second input, the one or more processors are configured to cause the processing system to concatenate a reference to the first set of channels and the first output (“In a subsequent stage (not shown), the memory spots used by no-longer needed element strips of the layer 503 will have been freed and reused in the same way as described above in reference to the layer 502. In further subsequent stages, the same will also apply to the layer 504,” [¶0071])

Regarding claim 6, Hill/Buckler/Zhou teaches The processing system of Claim 1, and Hill teaches wherein the one or more processors are further configured to cause the processing system to discard at least a portion of the first input based at least in part on portions of the first input used in convolving the second input into the second output (“The operational-memory spots used by element strip 508(2) of the input tensor for the layer 502 may be freed and reused since those value will not be needed for any future convolutions. It should be noted that the above-referenced operational-memory spots, as well as any others, may be freed (“discarded”) as soon as the corresponding calculations for the previous stage are completed and before additional elements of an input tensor are retrieved from the holding memory.” [¶0068])

Regarding claim 7, Hill/Buckler/Zhou teaches The processing system of Claim 6, and Hill teaches wherein the at least the portion of the first input is discarded further based on portions of the first input used in performing one or more additional convolutions for layers of the neural network deeper than the second layer of the neural network (“In a subsequent stage (not shown), the memory spots used by no-longer needed element strips of the layer 503 will have been freed and reused in the same way as described above in reference to the layer 502. In further subsequent stages, the same will also apply to the layer 504, once it has enough input elements to populate an entire stretched receptive field (not shown) and provide values for the input tensor for the layer 505. Note that the values for the input tensor for the layer 505 may be written out to the holding memory if, for example, there is insufficient operational memory to store it.” [¶0071])
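The discarding rationale for claims 6 and 7 (freeing input portions once no remaining convolution needs them) can be illustrated with a streaming sketch. This is a hypothetical one-dimensional analogue of strip-wise processing, not Hill's disclosed implementation:

```python
from collections import deque

import numpy as np

kernel = np.array([1.0, 2.0, 1.0])         # 3-tap first-layer kernel
window: deque = deque(maxlen=len(kernel))  # only 3 input elements kept live

outputs = []
for element in np.arange(10.0):            # stream the first input
    window.append(element)
    if len(window) == len(kernel):
        # Convolve the current receptive field; the oldest element is then
        # dropped ("discarded") automatically when the window slides on,
        # because no later convolution at this layer reads it again.
        outputs.append(float(np.dot(np.asarray(window), kernel)))

print(outputs)  # 8 outputs for 10 inputs with a width-3 kernel
```

The bounded deque stands in for Hill's operational-memory spots: an element is retained only while some pending receptive field still covers it.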
Regarding claim 8, Hill/Buckler/Zhou teaches The processing system of Claim 1, and Hill teaches wherein to partition the first input, the one or more processors are configured to cause the processing system to partition the first input such that the first set of channels has a different number of channels as the second set of channels (“As noted above, any of the above-described layers may have multiple kernels (e.g., for multiple channels) which would yield multiple corresponding input tensors (e.g., feature maps) for the subsequent layer.” [¶0073; multiple kernels implies different filter sizes and thus corresponds to a different number of channels.])

Regarding claim 9, Hill/Buckler/Zhou teaches The processing system of Claim 1, and Hill teaches wherein to convolve the second input into a second output via the second layer of the neural network, the one or more processors are configured to cause the processing system to process the first set of channels based on identity weights between an input and an output of the second layer of the neural network and to process the second input based on convolutional weights defined in the second layer of the neural network (“As can be seen, in this illustrative example, the processing of the input vector to layer 402 to generate the values for layer 405 requires (1) reading from the holding memory only the elements of the input vector, as well as the weights for the kernels of layers 402, 403, and 404, (2) writing to the holding memory only the calculated values of layer 405, and (3) keeping in the operational memory, during the processing, only the weights of the kernels for the layers 402, 403, and 404, the values in the receptive fields 407, 409, and 411, and the corresponding value 412.” [¶0060])
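Claim 9's identity-weight limitation can be illustrated as follows: a convolution whose weights for one channel group form an identity (a single centered 1 per channel) forwards that group unchanged, while ordinary convolutional weights process the remaining channels. This is a minimal sketch with hypothetical channel counts, not a construction asserted by the cited references:

```python
import torch
import torch.nn as nn

n_identity, n_conv = 4, 4
layer2 = nn.Conv2d(n_identity + n_conv, n_identity + n_conv,
                   kernel_size=3, padding=1, bias=False)

with torch.no_grad():
    # Zero the output channels that should merely forward the first set...
    layer2.weight[:n_identity].zero_()
    # ...then set a centered 1 so output channel i reproduces input channel i.
    for i in range(n_identity):
        layer2.weight[i, i, 1, 1] = 1.0

x = torch.randn(1, n_identity + n_conv, 16, 16)
y = layer2(x)
# The first n_identity channels pass through unchanged (identity weights),
# while the remaining channels are processed by ordinary learned weights.
assert torch.allclose(y[:, :n_identity], x[:, :n_identity])
```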
Regarding claim 10, Hill/Buckler/Zhou teaches The processing system of Claim 1, and Hill teaches wherein the one or more processors are further configured to cause the processing system to: concatenate the first set of channels and the second output into a third input for a third layer of the neural network (“The outputs of the convolutional connections may be considered to form a feature map in the subsequent layer, with each element of the feature map (e.g., 220) receiving input from a range of neurons in the previous layer (e.g., feature maps 218) and from each of the multiple channels.” [¶0042]); and convolve the third input into a third output via the third layer of the neural network (“a second convolutional layer configured to use a second-layer kernel for convolving the second-layer input tensor to generate a third-layer input tensor, and a third layer configured to receive the third-layer input tensor.” [¶0008]), wherein: the third output merges a first receptive field generated by the first layer of the neural network, a second receptive field generated by the second layer of the neural network (“In a subsequent third stage of processing DCN 401(3), a next element 406(3) of the input vector for the layer 402 is read into the operational memory from the holding memory to form the next receptive field 407(3), which is then matrix-multiplied by the first kernel of layer 402 to generate an output value 408(3) for the layer 403.” [¶0055]), and a third receptive field generated by the third layer of the neural network (“In a subsequent third stage of processing DCN 401(3), a next element 406(3) of the input vector for the layer 402 is read into the operational memory from the holding memory to form the next receptive field 407(3),” [¶0055]); and the one or more actions are taken further based, at least in part, on the third output (“The output of the deep convolutional network 350 is a classification 366 for the input data 352. The classification 366 may be a set of probabilities, where each probability is the probability of the input data including a feature from a set of classification features.” [¶0048; classification corresponds to an action.])

Zhou further teaches the third receptive field covers a smaller receptive field in the first input than the first receptive field and the second receptive field (“It is necessary to ensure that a receptive field of the convolutional layer equivalent to the linear operation is less than or equal to a receptive field of the first convolutional layer.” [¶0015; a receptive field of the convolutional layer that is less than or equal to a receptive field of the first convolutional layer would imply the third receptive field covers a smaller receptive field than the first and second receptive fields.]) The same motivation to combine the teachings of Hill/Buckler/Zhou applies as for claim 1.

Regarding claim 11, it is substantially similar to claim 1 and is rejected in the same manner, with the same art and reasoning applying.

Regarding claims 12-20, they are substantially similar to claims 2-10, respectively, and are rejected in the same manner, with the same art and reasoning applying.

Regarding claim 21, it is substantially similar to claim 1 and is rejected in the same manner, with the same art and reasoning applying.

Regarding claims 22-30, they are substantially similar to claims 2-10, respectively, and are rejected in the same manner, with the same art and reasoning applying.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL H HOANG whose telephone number is (571)272-8491. The examiner can normally be reached Mon-Fri 8:30AM-4:30PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL H HOANG/
PRIMARY EXAMINER, Art Unit 2122