Prosecution Insights
Last updated: April 19, 2026
Application No. 18/586,847

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM

Non-Final OA: §102, §103, §112

Filed: Feb 26, 2024
Examiner: ADU-JAMFI, WILLIAM NMN
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: NEC Corporation
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 25 across all art units (25 currently pending)

Statute-Specific Performance

§101: 19.5% (-20.5% vs TC avg)
§103: 36.8% (-3.2% vs TC avg)
§102: 28.7% (-11.3% vs TC avg)
§112: 14.9% (-25.1% vs TC avg)

Deltas are relative to the Tech Center average estimate • Based on career data from 0 resolved cases

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

The term “distribute” in claim 1 renders the claim indefinite. Claim 1 recites the limitation, “distribute the second mask to the convolutional layers of the second convolutional neural network, based on the resolutions used in the convolutional layers of the second convolutional neural network.” There is not sufficient clarity as to how the second mask is applied to, used by, or incorporated into the convolutional layers. It is therefore unclear whether “distribute” encompasses applying the mask to feature maps, providing the mask as an input to layers, gating activations, storing the mask in memory accessible to the layers, or some other operation. As claims 5 and 6 contain identical subject matter, they are also rejected. Accordingly, claims 2-4 and 7-9 are rejected for depending on claims 1 and 6, respectively.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3 and 6-8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Habibian et al. (“Skip-Convolutions for Efficient Video Processing”).

Regarding Claim 1, Habibian teaches an image processing apparatus comprising (Habibian: Fig. 1, shown below):

[Figure: Habibian Fig. 1]

at least one memory storing instructions (Habibian: 3.2.2 Gumbel Gate); Structured Sparsity: “First, block structures can be leveraged to reduce the memory overhead involved in gathering and scattering of input and output tensors.”

and at least one processor configured to execute the instructions to (Habibian: 4.1. Object Detection): Implementation details: “All models are trained with mini-batches of size 4 using four GPUs, where synchronized batch-norm is used to handle small effective batch sizes.”

generate a first mask based on a difference between a first frame image and a second frame image, or a difference between a first output feature map that is output from a first convolutional layer of a first convolutional neural network for processing the first frame image and a second output feature map that is output from a first convolutional layer of a second convolutional neural network for processing the second frame image (Habibian: 3. Skip Convolutions and 3.1 Convolution on Residual Frames);

3. Skip Convolutions: “Instead of treating a video as a sequence of still images, we represent it as a series of residual frames defined both for the input frames and for intermediate feature maps.”

3.1. Convolution on Residual Frames: “rt represents the residual frame as the difference between the current and previous feature maps xt − xt−1…to save even further, we introduce a gating function for each convolutional layer, g : R^(ci×h×w) → {0, 1}^(h×w), to predict a binary mask indicating which locations should be processed, and taking only rt as input.”

Explanation: This explicitly shows the following sequence: difference between frames → residual → used to generate binary mask, with a direct disclosure of feature-map differences.

generate a second mask for each of resolutions used in convolutional layers of the second convolutional neural network, based on the first mask and each of the resolutions (Habibian: Introduction and 3.1 Convolution on Residual Frames);

Introduction: “Each convolutional layer is coupled with a gating function learned to distinguish between the residuals that are important for the model accuracy and background regions that can be safely ignored (Fig. 1).”

3.1. Convolution on Residual Frames: “To save even further, we introduce a gating function for each convolutional layer, g : R^(ci×h×w) → {0, 1}^(h×w), to predict a binary mask indicating which locations should be processed, and taking only rt as input.”

Explanation: Since each convolutional layer can have different spatial resolutions, this establishes per-layer (per-resolution) mask generation.

and distribute the second mask to the convolutional layers of the second convolutional neural network, based on the resolutions used in the convolutional layers of the second convolutional neural network (Habibian: Introduction and 3.1 Convolution on Residual Frames).
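The claim 1 mapping hinges on the sequence of frame or feature-map difference, then residual, then binary mask. A minimal sketch of that sequence, assuming a fixed magnitude threshold in place of Habibian's learned gating function (the function name and threshold value here are illustrative, not from the reference):

```python
import numpy as np

def first_mask(prev, curr, threshold=0.1):
    """Binary mask from the residual between two frames or feature maps.

    prev, curr: arrays of shape (C, H, W). The mask marks spatial
    locations where the residual magnitude exceeds `threshold`.
    (Illustrative only: Habibian et al. learn the gate; here we just
    threshold the channel-wise mean absolute residual.)
    """
    residual = curr - prev                      # r_t = x_t - x_{t-1}
    magnitude = np.abs(residual).mean(axis=0)   # collapse channels -> (H, W)
    return (magnitude > threshold).astype(np.uint8)

prev = np.zeros((3, 4, 4))
curr = np.zeros((3, 4, 4))
curr[:, 1, 2] = 1.0          # one changed spatial location
mask = first_mask(prev, curr)
```

Only the location that changed between the two frames survives into the mask; everything else can be skipped.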
Introduction: “Each convolutional layer is coupled with a gating function learned to distinguish between the residuals that are important for the model accuracy and background regions that can be safely ignored (Fig. 1).”

3.1. Convolution on Residual Frames: “To save even further, we introduce a gating function for each convolutional layer, g : R^(ci×h×w) → {0, 1}^(h×w), to predict a binary mask indicating which locations should be processed, and taking only rt as input.”

[Equation image omitted]

Explanation: The mask g(rt) is applied directly within every convolutional layer.

Regarding Claim 2, Habibian teaches the image processing apparatus according to claim 1, wherein the at least one processor is further configured to execute the instructions to: generate the second mask by executing pooling processing on the first mask (Habibian: 3.2.2 Gumbel Gate).

Structured Sparsity: “More specifically, we add a max-pooling layer with the kernel size and stride of b followed by a nearest neighbor upsampling with the same scale factor of b. This enforces the predicted gates to have b × b structure, as illustrated in Figure 3.”

Explanation: This shows max-pooling applied to the gate/mask, which then generates the second structured mask.

Regarding Claim 3, Habibian teaches the image processing apparatus according to claim 1, wherein the at least one processor is further configured to execute the instructions to: every time a resolution used in the convolutional layers changes, generate the second mask based on the changed resolution (Habibian: 3.2.2 Gumbel Gate).
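The b × b structured-sparsity step quoted above (max-pool with kernel and stride b, then nearest-neighbor upsample by the same factor) can be sketched as follows; a NumPy reshape stands in for the pooling layer, and the mask dimensions are assumed divisible by b:

```python
import numpy as np

def structured_mask(mask, b=2):
    """Enforce b x b block structure on a binary (H, W) mask:
    max-pool with kernel/stride b, then nearest-neighbor upsample
    by b, per the passage quoted from Habibian 3.2.2.
    Assumes H and W are divisible by b (a simplification).
    """
    h, w = mask.shape
    blocks = mask.reshape(h // b, b, w // b, b)
    pooled = blocks.max(axis=(1, 3))                            # max-pool, stride b
    return np.repeat(np.repeat(pooled, b, axis=0), b, axis=1)   # NN upsample

m = np.zeros((4, 4), dtype=np.uint8)
m[1, 2] = 1                    # single active location
sm = structured_mask(m, b=2)   # whole 2x2 block becomes active
```

A single active pixel dilates to its enclosing b × b block, which is what makes the gather/scatter of active positions memory-friendly.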
3.2.2 Gumbel Gate: “For each convolutional layer l we define a light-weight gating function f(rt; φl), parameterized by φl, as a convolution with a single output channel…To generate masks of the same resolution, the gate function uses the same kernel size, stride, padding, and dilation as its corresponding layer.”

Explanation: This is a direct disclosure that mask resolution adapts whenever layer resolution changes.

Regarding Claim 5, it is rejected for the same reasons set forth in claim 1 because it is a method claim that performs substantially the same steps.

Regarding Claim 6, it is rejected for the same reasons set forth in claim 1 because it recites a non-transitory computer readable recording medium that performs substantially the same steps.

Regarding Claim 7, Habibian teaches the non-transitory computer readable recording medium according to claim 6, and the additional limitations are met as in the consideration of claim 2 above.

Regarding Claim 8, Habibian teaches the non-transitory computer readable recording medium according to claim 6, and the additional limitations are met as in the consideration of claim 3 above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Habibian et al. in view of Shaikh (“Smoothing of a Noisy Image Using Different Low Pass Filters”).
Regarding Claim 4, Habibian teaches the image processing apparatus according to claim 1, but fails to teach that the at least one processor is further configured to execute the instructions to: remove noise by executing blurring processing on the first frame image and the second frame image, or on the first output feature map and the second output feature map.

However, Shaikh teaches that blurring (smoothing/low-pass filtering) is used for noise removal, stating that “Low pass filters are basically used for removing noise from image” (Abstract). Shaikh further states that “Low pass filtering (aka smoothing), is employed to remove high spatial frequency noise from a digital image” (IV. FILTERING). Shaikh additionally explains that blurring filters such as mean filters and Gaussian filters are used for smoothing images, stating that “mean filter is the basic filter for blurring image…Gaussian blur is the fastest of the three options and is usually good for most applications” (IV. CONCLUSIONS).

Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate noise removal and blurring into Habibian’s system. Shaikh teaches that blurring is used to remove noise and improve the quality of image data for further processing. Therefore, adding a denoising step such as blurring to Habibian’s frame or feature map inputs would have been a predictable design choice to improve the quality and robustness of the data used for subsequent difference calculation and mask generation.

Regarding Claim 9, Habibian teaches the non-transitory computer readable recording medium according to claim 6, and the additional limitations are met as in the consideration of claim 4 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wei et al. (CN114118425A) teaches a dynamic sparse convolution method for accelerating neural network inference by generating data masks and channel masks from input feature maps to predict redundant computation locations and guide convolution layers to process only important positions, thereby skipping unnecessary operations and reducing computation while maintaining accuracy.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM ADU-JAMFI whose telephone number is (571) 272-9298. The examiner can normally be reached M-T 8:00-6:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILLIAM ADU-JAMFI/
Examiner, Art Unit 2677

/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677
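For context on the claim 4 combination, the mean filter that Shaikh calls the basic blurring filter could be applied to each frame before the residual is computed. A small sketch (the edge-padding choice and kernel size are assumptions for illustration, not taken from Shaikh):

```python
import numpy as np

def mean_blur(img, k=3):
    """k x k mean (box) filter, the 'basic filter for blurring' per
    Shaikh, used here to suppress noise before frame differencing.
    Edges are handled by edge-padding; purely illustrative.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    # Sum the k x k neighborhood by shifting the padded image.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

noisy = np.zeros((5, 5))
noisy[2, 2] = 9.0            # isolated 'noise' spike
smoothed = mean_blur(noisy)  # spike spread over its 3x3 neighborhood
```

The spike is averaged down from 9.0 to 1.0, so a subsequent difference-and-threshold step is less likely to flag isolated noise pixels as changed regions.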

Prosecution Timeline

Feb 26, 2024
Application Filed
Jan 15, 2026
Non-Final Rejection — §102, §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
