Prosecution Insights
Last updated: April 19, 2026
Application No. 18/277,719

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND RECORDING MEDIUM

Non-Final OA §103
Filed
Aug 17, 2023
Examiner
SHAW, PETER C
Art Unit
2493
Tech Center
2400 — Computer Networks
Assignee
NEC Corporation
OA Round
1 (Non-Final)
76%
Grant Probability
Favorable
1-2
OA Rounds
3y 5m
To Grant
99%
With Interview

Examiner Intelligence

Grants 76% — above average
76%
Career Allow Rate
422 granted / 553 resolved
+18.3% vs TC avg
Strong +36% interview lift
+35.7%
Interview Lift
across resolved cases with interview
Typical timeline
3y 5m
Avg Prosecution
46 currently pending
Career history
599
Total Applications
across all art units
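The headline figures in this panel are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, assuming a Tech Center 2400 average allow rate back-derived from the displayed "+18.3% vs TC avg" delta (the page reports only the delta, not the TC average itself):

```python
# Reproduce the dashboard's headline examiner statistics from the raw counts.
granted, resolved = 422, 553
allow_rate = granted / resolved

# Assumption: TC 2400 average allow rate, inferred so that the delta
# matches the displayed "+18.3% vs TC avg". Not reported on the page.
tc_avg = 0.580

print(f"Career allow rate: {allow_rate:.1%}")            # 76.3%
print(f"Delta vs TC avg:   {allow_rate - tc_avg:+.1%}")  # +18.3%
```

The badge rounds 76.3% down to the displayed 76% grant probability.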

Statute-Specific Performance

§101
11.2%
-28.8% vs TC avg
§103
55.7%
+15.7% vs TC avg
§102
13.9%
-26.1% vs TC avg
§112
12.7%
-27.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 553 resolved cases

Office Action

§103
DETAILED ACTION

Claims 14, 16 and 18-22 are pending in this action.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 14, 16, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (CN-102819023-A) [hereinafter “Chen”] in view of Yu et al. (WO-2010032297-A1) [hereinafter “Yu”].
As per claim 14, Chen teaches an information processing apparatus comprising at least one processor, the at least one processor carrying out: an obtaining process of obtaining input data which includes at least one of image data ([0014], using image data as input) and point cloud data ([0004] and [0037], using point cloud data to remove vegetation from input data); and an estimating process of estimating levels of importance with respect to a respective plurality of characteristics which are included in a frame indicated by the input data ([0092], estimating levels of importance in characteristics of an image and changing associated pixels to create a training set; see [0012]), with use of an inference model which has been trained with reference to replaced data that has been obtained by replacing at least one of the plurality of characteristics, which are included in the input data, with alternative data ([0012], changing elements of a pixel data set associated with a characteristic to create a training set used to train whether an image is landslide or non-landslide) in accordance with the levels of importance ([0011], pixels chosen based on importance determination; see [0092]). Chen does not explicitly teach estimating levels of importance with respect to a respective plurality of regions which are included in a frame. Yu teaches estimating levels of importance with respect to a respective plurality of regions which are included in a frame (Abstract, marking areas of an image with levels of importance). At the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen with the teachings of Yu, estimating levels of importance with respect to a respective plurality of regions which are included in a frame, to directly mark areas in an image that are of interest for training purposes.
As per claim 16, the substance of the claimed invention is identical or substantially similar to that of claim 14. Accordingly, this claim is rejected under the same rationale.

As per claim 18, the substance of the claimed invention is identical or substantially similar to that of claim 14. Accordingly, this claim is rejected under the same rationale.

As per claim 20, the combination of Chen and Yu teaches the information processing apparatus as set forth in claim 14, wherein: the at least one processor further carries out an evaluating process of deriving an evaluation value by referring to the replaced data; and in the evaluating process, the at least one processor derives the evaluation value with reference to an output obtained from a controller of a movable body into which the replaced data has been inputted (Chen; [0012], generated training set is inputted back into the system along with the LiDAR component, [0004], which is airborne radar and thus has a movable body).

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Chen and Yu in further view of Jiang et al. (CN-112257647-A) [hereinafter “Jiang”].

As per claim 19, the combination of Chen and Yu teaches the information processing apparatus as set forth in claim 14. The combination of Chen and Yu does not explicitly teach wherein, in the estimating process, the inference model estimates the levels of importance with use of a self-attention module. Jiang teaches wherein, in the estimating process, the inference model estimates the levels of importance with use of a self-attention module (Abstract, using self-attention mechanism to evaluate importance of image data). At the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen and Yu with the teachings of Jiang, wherein, in the estimating process, the inference model estimates the levels of importance with use of a self-attention module, to improve the accuracy and relevance of the input image data.
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Chen and Yu in further view of Yeh et al. (WO-2019128971-A1) [hereinafter “Yeh”].

As per claim 21, the combination of Chen and Yu teaches the information processing apparatus as set forth in claim 20. The combination of Chen and Yu does not explicitly teach wherein: the at least one processor further carries out a training process of training the inference model with reference to the evaluation value; the evaluation value includes a reward value derived from the output; and in the training process, the at least one processor trains the inference model so that the reward value becomes high. Yeh teaches wherein: the at least one processor further carries out a training process of training the inference model with reference to the evaluation value (Page 6, para. 1, using a reward function to train a neural network in manipulating cell image data); the evaluation value includes a reward value derived from the output; and in the training process, the at least one processor trains the inference model so that the reward value becomes high (Page 6, para. 1, reward is maximized to train model). At the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen and Yu with the teachings of Yeh, wherein: the at least one processor further carries out a training process of training the inference model with reference to the evaluation value; the evaluation value includes a reward value derived from the output; and in the training process, the at least one processor trains the inference model so that the reward value becomes high, to improve the accuracy and relevance of the input image data.

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Chen and Yu in further view of Shao et al.
(WO-2018121690-A1) [hereinafter “Shao”].

As per claim 22, the combination of Chen and Yu teaches the information processing apparatus as set forth in claim 20. The combination of Chen and Yu does not explicitly teach wherein: the at least one processor further carries out a training process of training the inference model with reference to the evaluation value; the evaluation value is a loss value derived from the output; and in the training process, the at least one processor trains the estimating means so that the loss value becomes low. Shao teaches wherein: the at least one processor further carries out a training process of training the inference model with reference to the evaluation value (Page 8, para. 4-6, training model based on determined loss value); the evaluation value is a loss value derived from the output (see id.); and in the training process, the at least one processor trains the estimating means so that the loss value becomes low (Page 8, para. 6, minimizing loss value below average value). At the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen and Yu with the teachings of Shao, wherein: the at least one processor further carries out a training process of training the inference model with reference to the evaluation value; the evaluation value is a loss value derived from the output; and in the training process, the at least one processor trains the estimating means so that the loss value becomes low, to improve the accuracy and relevance of the input image data.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Yamaji (US PGPUB No. 2016/0086342), Ju et al. (US PGPUB No. 2021/0174482), Ramanathan et al. (US Patent No. 11,048,973), Murata et al. (WO-2020110580-A1), Chai et al. (CN-112231516-A), Sugio et al. (WO-2020162495-A1), Wong et al.
(“Characterization of perceptual importance for object-based image segmentation,” Proceedings 2000 International Conference on Image Processing (Cat. No.00CH37101), Vancouver, BC, Canada, 2000, pp. 54-57 vol. 3, doi: 10.1109/ICIP.2000.899287), Shibata et al. (“Unified Image Fusion Framework With Learning-Based Application-Adaptive Importance Measure,” in IEEE Transactions on Computational Imaging, vol. 5, no. 1, pp. 82-96, March 2019, doi: 10.1109/TCI.2018.2879021), Pan et al. (“Label and Sample: Efficient Training of Vehicle Object Detector from Sparsely Labeled Data,” arXiv:1808.08603, Aug. 26, 2018) and Jyothi et al. (“Research study of neural networks for image categorization and retrieval,” 2010 The 2nd International Conference on Computer and Automation Engineering (ICCAE), Singapore, 2010, pp. 686-690, doi: 10.1109/ICCAE.2010.5451728) all disclose various aspects of the claimed invention including determining importance levels in an image using point cloud data.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER C SHAW whose telephone number is (571)270-7179. The examiner can normally be reached Max Flex. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carl Colin, can be reached at 571-272-3862. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PETER C SHAW/
Primary Examiner, Art Unit 2493
March 2, 2026

Prosecution Timeline

Aug 17, 2023
Application Filed
Mar 02, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566852
NEFARIOUS CODE DETECTION USING SEMANTIC UNDERSTANDING
2y 5m to grant Granted Mar 03, 2026
Patent 12547696
WIRELESS BATTERY MANAGEMENT SYSTEM SAFETY CHANNEL COMMUNICATION LAYER PROTOCOL
2y 5m to grant Granted Feb 10, 2026
Patent 12536342
SOC ARCHITECTURE WITH SECURE, SELECTIVE PERIPHERAL ENABLING/DISABLING
2y 5m to grant Granted Jan 27, 2026
Patent 12511438
DYNAMIC PROVISION OF SOFTWARE APPLICATION FEATURES
2y 5m to grant Granted Dec 30, 2025
Patent 12513190
SNAPSHOT FOR ACTIVITY DETECTION AND THREAT ANALYSIS
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+35.7%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 553 resolved cases by this examiner. Grant probability derived from career allow rate.
