Prosecution Insights
Last updated: April 19, 2026
Application No. 17/434,625

NEURAL NETWORK, COMPUTATION METHOD, AND RECORDING MEDIUM

Non-Final OA §103
Filed: Aug 27, 2021
Examiner: HONORE, EVEL NMN
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Panasonic Intellectual Property Management Co., Ltd.
OA Round: 4 (Non-Final)
Grant Probability: 39% (At Risk)
Expected OA Rounds: 4-5
Time to Grant: 4y 5m
Grant Probability With Interview: 85%

Examiner Intelligence

Career Allow Rate: 39% (7 granted / 18 resolved; -16.1% vs TC avg)
Interview Lift: +46.4% for resolved cases with an interview versus without
Avg Prosecution (typical timeline): 4y 5m; 38 applications currently pending
Total Applications (career history): 56, across all art units

Statute-Specific Performance

§101: 42.6% (+2.6% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 6.6% (-33.4% vs TC avg)
§112: 1.1% (-38.9% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 18 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Applicant's arguments, see the Response After Final Action filed on 01/16/2026, with respect to the Final Rejection have been fully considered and are persuasive. The Final Rejection of 11/03/2025 has been withdrawn. This action is responsive to the After Final response filed on 01/16/2026. No claims have been amended. Claims 1-6 are pending in this case. Claims 1, 5, and 6 are independent claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over GAO et al. (US Pub. No. 20190114511 A1), hereinafter GAO, in view of YANG et al. (US Pub. No. 20180181838 A1), hereinafter YANG.

With respect to claim 1, GAO discloses: a neural network, comprising: an input layer to which input information is input (in paragraph [0107], GAO discloses that each convolutional neural network (CNN) takes input from features in the layer).
An output layer from which the feature volume extracted is output, wherein each of the plurality of blocks includes: a residual block formed by combining one or more first convolutional layers and a skip connection, which is a connection that bypasses the one or more first convolutional layers (in paragraph [0107], GAO discloses that a convolutional neural network learns highly non-linear mappings by interconnecting layers of artificial neurons arranged in many different layers with activation functions that make the layers dependent, and that it includes one or more convolutional layers. In paragraph [0109], GAO discloses that the first convolution layer takes an input feature map that is 28 by 28 pixels with 1 channel and creates an output feature map that is 26 by 26 pixels with 32 channels; this layer applies 32 filters to the input, and each of the 32 output channels is a 26 by 26 grid of values showing how well each filter matches different parts of the input. Fig. 8 teaches two CNN layers (and in fact more than two): CNN layer 1, a dilated convolution, and CNN layer 2, a 1x1 convolution inside the residual block. Fig. 8 and paragraph [0162] further depict one implementation of residual blocks and skip connections: residual blocks include a shortcut that lets some information skip the usual processing, so the output is the sum of the processed input and the original input.)

A connection block which includes at least a second convolutional layer and improves non-linearity of an input of each of the plurality of blocks and an output of each of the plurality of blocks by equalizing an output of the one or more first convolutional layers and an output of the skip connection (in paragraph [0113], GAO discloses that the convolutional neural network is changed or trained so that the input data gives a certain output, adjusted using back propagation by comparing the output to the correct answer until the output gets closer to it. In Fig. 8 and paragraph [0163], GAO discloses CNN layer 2, the 1x1 convolution inside the residual block. The residual blocks improve training by adding identity skip connections. Convolutional feedforward networks connect the output of the l-th layer as input to the (l+1)-th layer, which gives rise to the layer transition x_l = H_l(x_{l-1}); residual blocks add a skip connection that bypasses the non-linear transformations with an identity function: x_l = H_l(x_{l-1}) + x_{l-1}.)

With respect to claim 1, GAO does not explicitly disclose: a plurality of blocks to be used to extract a feature volume from the input information. However, YANG discloses this limitation (in paragraph [0043], YANG discloses that the input image processed through the convolution operation may be substantially divided into various components, and that the one input image may be divided into pieces of image data expressing a color and a contrast associated with each RGB (red, green, blue) component).

GAO and YANG are analogous art because both references concern ResNet-style architectures. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify GAO, with residual blocks and skip connections as taught by GAO, with input images divided into pieces of image data expressing a color and a contrast associated with each RGB component as taught by YANG. The motivation for doing so would have been to improve accuracy for image classification and object detection (see [0163] of GAO).
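The layer transition the rejection quotes from GAO, x_l = H_l(x_{l-1}) + x_{l-1}, can be illustrated with a minimal sketch. The shapes and the choice of H_l (a per-position linear map standing in for a 1x1 convolution, followed by ReLU) are illustrative assumptions, not the actual GAO or claimed implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, weight):
    """Residual transition x_l = H_l(x_{l-1}) + x_{l-1}.

    H_l is modeled as a 1x1 convolution (per-position linear map)
    followed by ReLU; real blocks stack k x k convolutions.
    """
    h = relu(x @ weight)  # H_l(x_{l-1}): the processed (residual) path
    return h + x          # skip connection re-adds the unprocessed input

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))       # 4 spatial positions, 8 channels
w = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w)

# With zero weights, H_l outputs 0 after ReLU and the block is the identity.
assert np.allclose(residual_block(x, np.zeros((8, 8))), x)
```

The identity check at the end is the property that makes such blocks easy to train: the skip connection guarantees the input can pass through unchanged even when the convolutional path contributes nothing.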
With respect to claim 5, GAO discloses: an output layer from which the feature volume extracted is output, the method comprising (in paragraph [0107], GAO discloses that a convolutional neural network learns highly non-linear mappings by interconnecting layers of artificial neurons arranged in many different layers with activation functions that make the layers dependent, and that it includes one or more convolutional layers; in paragraph [0109], GAO discloses that the first convolution layer takes an input feature map that is 28 by 28 pixels with 1 channel and, using 32 filters, creates an output feature map that is 26 by 26 pixels with 32 channels, each output channel being a 26 by 26 grid of values showing how well each filter matches different parts of the input);

inputting first information to a residual block included in the plurality of blocks and formed by combining one or more first convolutional layers and a skip connection which is a connection that bypasses the one or more first convolutional layers (GAO at paragraphs [0107] and [0109], as quoted above; Fig. 8 teaches two CNN layers (and in fact more than two), CNN layer 1 being a dilated convolution and CNN layer 2 a 1x1 convolution inside the residual block; Fig. 8 and paragraph [0162] further depict one implementation of residual blocks and skip connections, in which a shortcut lets some information skip the usual processing so that the output is the sum of the processed input and the original input); and

inputting a feature volume extracted from the first information by the one or more first convolutional layers and the first information output by the skip connection to a connection block included in the plurality of blocks and including at least a second convolutional layer, to improve non-linearity of an input of each of the plurality of blocks and an output of each of the plurality of blocks by equalizing the feature volume in the first information and the first information (GAO at paragraphs [0107], [0109], [0162], and Fig. 8, as quoted above. In paragraph [0113], GAO discloses that the convolutional neural network is changed or trained so that the input data gives a certain output, adjusted using back propagation by comparing the output to the correct answer until the output gets closer to it. In Fig. 8 and paragraph [0163], GAO discloses CNN layer 2, the 1x1 convolution inside the residual block; the residual blocks improve training by adding identity skip connections. Convolutional feedforward networks connect the output of the l-th layer as input to the (l+1)-th layer, giving the layer transition x_l = H_l(x_{l-1}); residual blocks add a skip connection that bypasses the non-linear transformations with an identity function: x_l = H_l(x_{l-1}) + x_{l-1}.)

With respect to claim 5, GAO does not explicitly disclose: a method for computing a plurality of blocks that are included in a neural network and used to extract a feature volume from input information, the neural network including an input layer to which the input information is input, the plurality of blocks. However, YANG discloses this limitation (in paragraph [0043], YANG discloses that the input image processed through the convolution operation may be substantially divided into various components, and that the one input image may be divided into pieces of image data expressing a color and a contrast associated with each RGB (red, green, blue) component).

GAO and YANG are analogous art because both references concern ResNet-style architectures.
Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify GAO, with residual blocks and skip connections as taught by GAO, with input images divided into pieces of image data expressing a color and a contrast associated with each RGB component as taught by YANG. The motivation for doing so would have been to improve accuracy for image classification and object detection (see [0163] of GAO).

With respect to claim 6, GAO discloses: an output layer from which the feature volume extracted is output, the program causing a computer to execute (GAO at paragraphs [0107] and [0109], as quoted above for claim 1);

inputting first information to a residual block included in the plurality of blocks and formed by combining one or more first convolutional layers and a skip connection which is a connection that bypasses the one or more first convolutional layers (GAO at paragraphs [0107], [0109], [0162], and Fig. 8, as quoted above, teaching two CNN layers, a dilated convolution and a 1x1 convolution inside the residual block, and an implementation of residual blocks and skip connections in which the output is the sum of the processed input and the original input); and

inputting a feature volume extracted from the first information by the one or more first convolutional layers and the first information output by the skip connection to a connection block included in the plurality of blocks and including at least a second convolutional layer, to improve non-linearity of an input of each of the plurality of blocks and an output of each of the plurality of blocks by equalizing the feature volume in the first information and the first information (GAO at paragraphs [0107], [0109], [0113], [0162], [0163], and Fig. 8, as quoted above, including the layer transition x_l = H_l(x_{l-1}) and the identity residual form x_l = H_l(x_{l-1}) + x_{l-1}).
With respect to claim 6, GAO does not explicitly disclose: a non-transitory computer-readable recording medium having recorded thereon a program for performing a method for computing a plurality of blocks that are included in a neural network and used to extract a feature volume from input information, the neural network including an input layer to which the input information is input, the plurality of blocks. However, YANG discloses this limitation (in paragraph [0043], YANG discloses that the input image processed through the convolution operation may be substantially divided into various components, and that the one input image may be divided into pieces of image data expressing a color and a contrast associated with each RGB (red, green, blue) component).

GAO and YANG are analogous art because both references concern ResNet-style architectures. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify GAO, with residual blocks and skip connections as taught by GAO, with input images divided into pieces of image data expressing a color and a contrast associated with each RGB component as taught by YANG. The motivation for doing so would have been to improve accuracy for image classification and object detection (see [0163] of GAO).

Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over GAO, in view of YANG, and further in view of Dutta et al. (US Pub. No. 20190188537 A1), hereinafter Dutta.

Regarding claim 2, GAO in view of YANG discloses the elements of claim 1.
GAO in view of YANG does not explicitly disclose: the neural network according to claim 1, wherein the connection block includes: a second convolutional layer to which the output of the one or more first convolutional layers and the output of the skip connection are input; a first output layer to which output of the second convolutional layer is input; a weighting layer which adds a weight stored in advance to output of the first output layer; and a third convolutional layer to which output of the weighting layer is input.

However, Dutta discloses these limitations:

a second convolutional layer to which the output of the one or more first convolutional layers and the output of the skip connection are input (in paragraph [0030], Dutta discloses that the conv(k) operation comprises a simple convolutional filter layer 204 with a k×k filter size, and that the rc_conv(k) operation comprises a first convolutional filter layer 208 having a k×1 filter size followed by a second convolutional filter layer 212 having a 1×k filter size, a structure that reduces the number of parameters used);

a first output layer to which output of the second convolutional layer is input (in paragraph [0030], Dutta discloses that the conv(k) operation comprises a simple convolutional filter layer 204 with a k×k filter size);

a weighting layer which adds a weight stored in advance to output of the first output layer (in paragraph [0033], Dutta discloses that the concat operation has a layer that combines the outputs (A, B, C, and D) from earlier steps; the add_det operation has a layer that adds those outputs together; and the add_stc operation has a layer that also adds them, but with each output multiplied by a random number before adding, where in some cases the random numbers come from a uniform distribution); and

a third convolutional layer to which output of the weighting layer is input (in paragraph [0030], Dutta discloses that the sp_conv(k) operation is a k×k depthwise separable convolution comprising a depthwise convolution layer 216 followed by a pointwise convolution layer).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of GAO in view of YANG before them, to include Dutta, with concatenating layer outputs provided by the preceding convolution operation branches in the feature dimension as taught by Dutta. The motivation for doing so would have been to increase the feature depth of the output of the combination layer (see [0034] of Dutta).

Regarding claim 3, GAO in view of YANG discloses the elements of claim 1. GAO in view of YANG does not explicitly disclose: the neural network according to claim 1, wherein each of the plurality of blocks further outputs the output of the skip connection in addition to output of the connection block, and the connection block includes: a second convolutional layer to which the output of the one or more first convolutional layers and the output of the skip connection are input; a first output layer to which output of the second convolutional layer is input; a weighting layer which adds a weight stored in advance to output of the first output layer; and a third convolutional layer to which output of the weighting layer is input.

However, Dutta discloses the limitations:

a second convolutional layer to which the output of the one or more first convolutional layers and the output of the skip connection are input (in paragraph [0030], Dutta discloses that the conv(k) operation comprises a simple convolutional filter layer 204 with a k×k filter size, and that the rc_conv(k) operation comprises a first convolutional filter layer 208 having a k×1 filter size followed by a second convolutional filter layer 212 having a 1×k filter size, a structure that reduces the number of parameters used);

a first output layer to which output of the second convolutional layer is input (the building block takes the form of a residual network, referred to elsewhere as a residual block, comprising a residual branch, a skip connection referred to elsewhere as a feedforward branch, and a summation element);

a weighting layer which adds a weight stored in advance to output of the first output layer (in paragraph [0033], Dutta discloses that the concat operation has a layer that combines the outputs (A, B, C, and D) from earlier steps; the add_det operation has a layer that adds those outputs together; and the add_stc operation has a layer that also adds them, but with each output multiplied by a random number before adding, where in some cases the random numbers come from a uniform distribution);

a shortcut connection which skips the first output layer and the weighting layer (in paragraph [0024], Dutta discloses that the building block takes the form of a residual network (a residual block) comprising a residual branch, a skip connection (a feedforward branch), and a summation element); and

a third convolutional layer to which output of the weighting layer and output of the shortcut connection are input (in paragraph [0030], Dutta discloses that the sp_conv(k) operation is a k×k depthwise separable convolution comprising a depthwise convolution layer 216 followed by a pointwise convolution layer).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of GAO in view of YANG before them, to include Dutta, with concatenating layer outputs provided by the preceding convolution operation branches in the feature dimension as taught by Dutta. The motivation for doing so would have been to increase the feature depth of the output of the combination layer (see [0034] of Dutta).

Regarding claim 4, GAO in view of YANG and Dutta discloses the elements of claim 2. In addition, GAO discloses: the neural network according to claim 2, wherein the first output layer outputs a value obtained by applying a softmax function to the output of the second convolutional layer input to the first output layer (in paragraph [0281], GAO discloses that the outputs from layer 2 pass through 9 residual blocks (layers 3 to 11), the output of each layer feeding the next, with connections that add the output of every 3rd residual block (layers 5, 8, and 11). The combined outputs are then sent to two 1D convolutions (layers 12 and 13) with ReLU activations, and the output from layer 13 goes to the softmax layer, which calculates the probabilities of three possible results based on the input).

Response to Arguments

Applicant's arguments filed on 01/16/2026 have been fully considered and are persuasive in part, in combination with applicant's amendments and arguments.

Pertaining to the rejection under 101: the 35 USC 101 rejection of claims 1-6 is withdrawn.

Pertaining to the rejection under 103: applicant's arguments regarding the examiner's rejections under 35 USC 103 are moot in view of the new grounds of rejection.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EVEL HONORE, whose telephone number is (703) 756-1179. The examiner can normally be reached Monday-Friday, 8 a.m.-5:30 p.m.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela D Reyes, can be reached at (571) 270-1006. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

EVEL HONORE
Examiner, Art Unit 2142

/Mariela Reyes/
Supervisory Patent Examiner, Art Unit 2142
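For orientation, the connection-block structure recited in claims 2-4 (second convolutional layer, softmax first output layer per claim 4, stored-weight weighting layer, third convolutional layer, and the claim 3 shortcut that bypasses the two middle layers) can be sketched as follows. The shapes, the per-position linear maps standing in for convolutions, and the reading of "adds a weight stored in advance" as an elementwise scaling are all illustrative assumptions, not the application's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def connection_block(residual_out, skip_out, w2, stored_weight, w3):
    """Hypothetical sketch of the claimed connection block:
    second conv -> softmax output layer -> weighting layer -> third conv,
    plus the claim 3 shortcut that skips the softmax and weighting layers."""
    z = (residual_out + skip_out) @ w2  # second convolutional layer
    p = softmax(z)                      # first output layer (claim 4: softmax)
    weighted = p * stored_weight        # weighting layer: weight stored in advance
    shortcut = z                        # claim 3 shortcut bypasses the two layers above
    return (weighted + shortcut) @ w3   # third convolutional layer

rng = np.random.default_rng(1)
res, skip = rng.standard_normal((2, 4, 8))  # 4 positions, 8 channels each
w2, w3 = rng.standard_normal((2, 8, 8))
stored = np.full(8, 0.5)                    # hypothetical pre-stored weights
out = connection_block(res, skip, w2, stored, w3)
```

The sketch is only meant to make the claim language concrete; how the claimed "equalizing" of the residual and skip outputs differs from GAO's plain summation is exactly the dispute the §103 rejection turns on.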

Prosecution Timeline

Aug 27, 2021: Application Filed
Sep 27, 2024: Non-Final Rejection — §103
Dec 23, 2024: Response Filed
Apr 29, 2025: Non-Final Rejection — §103
Jul 11, 2025: Response Filed
Oct 30, 2025: Final Rejection — §103
Dec 23, 2025: Interview Requested
Jan 08, 2026: Applicant Interview (Telephonic)
Jan 08, 2026: Examiner Interview Summary
Jan 16, 2026: Response after Final Action
Feb 11, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566942: System and Method for Generating Parametric Activation Functions. Granted Mar 03, 2026 (2y 5m to grant).
Patent 12547946: Systems and Methods for Field Extraction from Unlabeled Data. Granted Feb 10, 2026 (2y 5m to grant).
Patent 12547906: Method, Device, and Program Product for Training Model. Granted Feb 10, 2026 (2y 5m to grant).
Patent 12536156: Updating Metadata Associated with Historic Data. Granted Jan 27, 2026 (2y 5m to grant).
Patent 12406483: Online Class-Incremental Continual Learning with Adversarial Shapley Value. Granted Sep 02, 2025 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 39%
With Interview: 85% (+46.4%)
Median Time to Grant: 4y 5m
PTA Risk: High
Based on 18 resolved cases by this examiner. Grant probability is derived from the career allow rate.
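The projection figures are mutually consistent, assuming the interview lift is additive in percentage points (the reading suggested by the dashboard's "+46.4%" notation):

```python
# Derive the dashboard's headline numbers from the examiner's career data.
granted, resolved = 7, 18
career_allow_rate = granted / resolved               # 7/18 = 0.3888... -> "39%"
interview_lift = 46.4 / 100                          # +46.4 percentage points
with_interview = career_allow_rate + interview_lift  # 0.8528... -> "85%"

assert round(career_allow_rate * 100) == 39
assert round(with_interview * 100) == 85
```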
