Prosecution Insights
Last updated: April 19, 2026
Application No. 18/203,342

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE NON-TRANSITORY STORAGE MEDIUM

Final Rejection — §103, §DP

Filed: May 30, 2023
Examiner: HAIDER, SYED
Art Unit: 2633
Tech Center: 2600 — Communications
Assignee: NEC Corporation
OA Round: 2 (Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 83% (709 granted / 850 resolved; +21.4% vs TC avg — above average)
Interview Lift: +4.4% among resolved cases with interview (minimal)
Avg Prosecution: 2y 6m (typical timeline)
Total Applications: 885 across all art units (35 currently pending)
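The headline figures in this panel are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, using the counts shown above; treating the interview lift as an additive bump over the base rate is an assumption, since the page does not state how the 88% with-interview figure is derived:

```python
# Career allow rate from the examiner's resolved-case counts shown above.
granted = 709
resolved = 850

allow_rate = granted / resolved            # 0.834..., reported as 83%
print(f"Career allow rate: {allow_rate:.1%}")

# The dashboard reports a +4.4 point interview lift. Adding it to the base
# rate (an assumption, not a disclosed formula) reproduces the 88% figure.
interview_lift = 0.044
with_interview = allow_rate + interview_lift
print(f"With interview: {with_interview:.0%}")
```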

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 22.9% (-17.1% vs TC avg)
§112: 9.2% (-30.8% vs TC avg)

Tech Center averages are estimates • Based on career data from 850 resolved cases
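Each per-statute card pairs the examiner's rate with a delta against the Tech Center average, so the implied TC baseline can be recovered by subtraction. A small sketch with the rates read off the cards above (the page does not say what the rates measure, e.g., rejection frequency vs. overcome rate, so only the arithmetic is shown):

```python
# Per-statute rates and reported deltas vs. the TC average, as decimals.
examiner_rate = {"101": 0.056, "103": 0.545, "102": 0.229, "112": 0.092}
delta_vs_tc = {"101": -0.344, "103": 0.145, "102": -0.171, "112": -0.308}

# Implied TC baseline for each statute: examiner rate minus reported delta.
tc_baseline = {s: round(examiner_rate[s] - delta_vs_tc[s], 3)
               for s in examiner_rate}

for statute, base in tc_baseline.items():
    print(f"§{statute}: implied TC average {base:.1%}")
```

Notably, all four cards imply the same 40.0% baseline, which suggests the dashboard compares each statute against a single Tech-Center-wide figure rather than per-statute averages.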

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, filed on 10/30/2025, with respect to the rejection of claims under 35 U.S.C. 101 have been fully considered and are persuasive in view of the amendments. The rejection of the claims has been withdrawn. Applicant's arguments with respect to the rejection of claims 1-3, 5, and 7-10 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Further, the Drawings filed on 5/30/2023 have been accepted.

Claim Objections

Claim 7 is objected to because of the following informalities: claim 7, line 5, recites "it is"; the Examiner suggests utilizing language corresponding to "it is" in the claim rather than claiming "it is". Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1 and 9-10 of instant application No. 18/203,342 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 8-9 of copending Application No. 18/203,383 in view of Guan (US PGPUB 2021/0150198 A1), as explained below. Copending application claims 1 and 8-9 disclose all the claim limitations of instant application claims 1 and 9-10, respectively, except an acquisition process of acquiring sensor information from a camera; a recognition process of recognizing an action of the person based on a relevance between a plurality of different types of features of the person and information pertaining to the object; and an output process of outputting a measurement result obtained in the measurement process to a display apparatus.

Guan discloses an acquisition process of acquiring sensor information from a camera (Guan, Fig. 1:1:2); a recognition process of recognizing an action of the person based on a relevance between a plurality of different types of features of the person and information pertaining to the object (Guan, Fig. 1:1, and paragraphs 5, 46, 89 and 130); and an output process of outputting a measurement result obtained in the measurement process to a display apparatus (Guan, Fig. 2:17:23). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the copending application's teachings to arrive at the instant application's teachings, as taught by Guan. The motivation would be to monitor a plurality of targets with high accuracy (paragraph 132), as taught by Guan. This is a provisional nonstatutory double patenting rejection.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, and 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US PGPUB 2023/0063926 A1) and further in view of Guan (US PGPUB 2021/0150198 A1).

As per claim 1, Zhang discloses an information processing apparatus (Zhang, Fig. 1:100), comprising at least one processor (Zhang, Fig. 1:132), the at least one processor carrying out: an acquisition process of acquiring sensor information from a camera (Zhang, Fig. 1:110:130); a detection process of detecting a person and an object based on sensor information (Zhang, Fig. 1:138:140-146, and paragraphs 81, 86 and 89, disclosing detection of a person and a package); a recognition process of recognizing an action of the person based on a relevance between the person and information pertaining to the object (Zhang, Fig. 1:152, and paragraphs 101-102 and 127, disclosing "The action recognition module 152 is first configured to determine whether the person p.sub.i picks up, holds, or drops the package q.sub.j at time t according to the person's keypoints information in its tracking trajectory f.sub.i.sup.t, and the package's bounding box information in its tracking trajectory g.sub.j.sup.t"); a measurement process of measuring, based on a recognition result of the action, a time period for which the person has continued the action (Zhang, paragraph 102, disclosing "the action recognition module 152 is further configured to update the action sets A.sup.t−1 to A.sup.t, i.e., if a person p.sub.i picks up a package q.sub.j, this module starts to record a potential action a.sub.i,j.sup.t; if a person p.sub.i holds or drops a package q.sub.j, this module updates related action a.sub.i,j.sup.t−1 to a.sub.i,j.sup.t with person and package position information"); and an output process of outputting a measurement result obtained in the measurement process to a display apparatus (Zhang, Fig. 1:154:190, and paragraphs 101, 108 and 126).

Although Zhang discloses a recognition process of recognizing an action of the person based on a relevance between the person and information pertaining to the object, as explained above, Zhang does not explicitly disclose recognizing an action of the person based on a relevance between a plurality of different types of features of the person and information pertaining to the object. Said limitation, however, would have been obvious in view of Zhang's teachings, since Zhang discloses in paragraph 127: "The I3D model is a 3D convolutional neural network and trained by video clips in the two action classes. Notably, other measurements (e.g., body keypoints trajectories and package trajectories) and association algorithms (e.g., pose estimation method and other action recognition methods etc.) can also be used to do this task."

Further, said limitation is well known in the art. For instance, Guan discloses recognizing an action of the person based on a relevance between a plurality of different types of features of the person and information pertaining to the object (Guan, Fig. 1:1, and paragraphs 5, 46, 89 and 130, disclosing a monitoring target…such as the worker, where the motion of each worker is recognized for each of plural parts such as the head and the body of the worker, and based on the recognition result of the motion of the head and the recognition result of the motion of the body, the state of the worker (during meal, shelving a product, walking, etc.) may be recognized). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhang's teachings by implementing an action recognition processor in the system, as taught by Guan. The motivation would be to monitor a plurality of targets with high accuracy (paragraph 132), as taught by Guan.

As per claim 2, Zhang in view of Guan further discloses the information processing apparatus according to claim 1, wherein: in the recognition process, the at least one processor refers to action identification information to recognize an action of the person detected in the detection process (Zhang, paragraph 101, disclosing that the action recognition module 152 starts an action when the person picks up the package and continues it while the person holds and drops off the package), the action identification information indicating a relevance between a feature of a person in a predetermined action and a feature of an object related to the predetermined action (Zhang, paragraphs 89, 100, 102 and 141, disclosing "The detection model service 170 is configured to provide the two categories of detection-person and package, and to provide bounding box and keypoints for the person category and only bounding box for the package category. The configuration is advantageous by having different detection features for different detection categories. In certain embodiments, the training of the detection model service 170 uses both moving and stationary persons, and both moving and stationary packages, so that the detection model service 170 is able to detect the persons and packages in the video frames more completely").

As per claim 3, Zhang in view of Guan further discloses the information processing apparatus according to claim 1, wherein: in the recognition process, the at least one processor uses an inference model to recognize an action of the person detected in the detection process (Zhang, paragraphs 89, 102, 105 and 149, disclosing "if a person p.sub.i picks up a package q.sub.j, this module starts to record a potential action a.sub.i,j.sup.t; if a person p.sub.i holds or drops a package q.sub.j, this module updates related action a.sub.i,j.sup.t−1 to a.sub.i,j.sup.t with person and package position information; if a person p.sub.i drops a package q.sub.j for a while, this module updates related action a.sub.i,j.sup.t−1 to a.sub.i,j.sup.t with package position information"); into the inference model, information indicating a feature of the person and information pertaining to the object are input (Zhang, paragraphs 89 and 141, disclosing the person's features and the object category); and from the inference model, information indicating a relevance between the person and the object in a predetermined action is output (Zhang, paragraphs 89, 126, 141 and 156, disclosing that the recognition module 152 ends the action and sends the rough handling action to the output module 154).
As per claim 5, Zhang in view of Guan further discloses the information processing apparatus according to claim 1, wherein: in the recognition process, in a case where an action of the person is not any of a plurality of predetermined actions, the at least one processor recognizes that the action of the person is an unidentified action (Zhang, paragraph 105, disclosing that the recognition module 152 determines at procedure 506 that the person and the package for calculating their distance do not belong to an action, i.e., an unrecognized action); and in the measurement process, the at least one processor carries out measurement while adding a time period for which the unidentified action has been continued to a time period for which the person has continued another action different from the unidentified action (Zhang, paragraphs 101-102 and 105, disclosing that when the recognition module 152 determines at procedure 506 that the person and the package for calculating their distance do not belong to an action, and the calculated distance between them equals or is less than the predetermined distance of 10 pixels, then at procedure 508 the recognition module 152 starts an action and defines the status of the action at the current time as pickup).

As per claim 7, Zhang in view of Guan further discloses the information processing apparatus according to claim 1, wherein: the action which has been recognized in the recognition process is an operation included in a predetermined process (Zhang, paragraphs 101, 108 and 126, disclosing that the recognition module 152 sends the rough handling action information and the score to the output module 154); and in the output process, the at least one processor outputs, in a form in which it is recognizable that each of the operations is included in the predetermined process (Zhang, paragraphs 101-102, disclosing "The action recognition module 152 is first configured to determine whether the person p.sub.i picks up, holds, or drops the package q.sub.j at time t according to the person's keypoints information in its tracking trajectory f.sub.i.sup.t, and the package's bounding box information in its tracking trajectory g.sub.j.sup.t"), information indicating a time period for which the person has continued that operation (Zhang, paragraph 102, disclosing "the action recognition module 152 is further configured to update the action sets A.sup.t−1 to A.sup.t, i.e., if a person p.sub.i picks up a package q.sub.j, this module starts to record a potential action a.sub.i,j.sup.t; if a person p.sub.i holds or drops a package q.sub.j, this module updates related action a.sub.i,j.sup.t−1 to a.sub.i,j.sup.t with person and package position information").

As per claim 8, Zhang in view of Guan further discloses the information processing apparatus according to claim 1, wherein: in the output process, the at least one processor outputs measurement results respectively related to the actions in an order based on the measurement results (Zhang, paragraphs 89, 102 and 153, disclosing "after an action is determined to be a rough action, the action recognition module 152 may optionally calculate a rough handling score for the action, and sends the determined rough handling action and its rough handling score to the output module 154. In certain embodiments, the action recognition module 152 defines a high rough handing score of 3 and a light rough handling score of 1").

As per claim 9, please see the analysis of claim 1.

As per claim 10, Zhang further discloses a computer-readable non-transitory storage medium storing a program for causing a computer to function as an information processing apparatus (Zhang, paragraphs 47 and 157), the program causing the computer to carry out the recited processes; for the rest of the claim limitations, please see the analysis of claim 1.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED Z HAIDER, whose telephone number is (571) 270-5169. The examiner can normally be reached Monday-Friday, 9-5:30 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SAM K Ahn, can be reached at 571-272-3044. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SYED HAIDER/
Primary Examiner, Art Unit 2633

Prosecution Timeline

May 30, 2023
Application Filed
Jul 31, 2025
Non-Final Rejection — §103, §DP
Sep 13, 2025
Interview Requested
Sep 15, 2025
Interview Requested
Sep 25, 2025
Examiner Interview Summary
Sep 25, 2025
Applicant Interview (Telephonic)
Oct 30, 2025
Response Filed
Dec 11, 2025
Final Rejection — §103, §DP (current)
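One detail worth noting in the timeline arithmetic: by the date of the current final rejection, the case already spans roughly the examiner's 2y 6m median prosecution time. A small sketch with the dates from the docket above (30.44 days is the average Gregorian month length, used here only for a rough conversion):

```python
from datetime import date

# Prosecution milestones from the timeline above.
filed = date(2023, 5, 30)
final_rejection = date(2025, 12, 11)

elapsed_days = (final_rejection - filed).days       # 926 days
elapsed_months = elapsed_days / 30.44               # ~30 months, i.e. ~2y 6m
print(f"Elapsed at final rejection: ~{elapsed_months:.0f} months")
```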

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602430: Method for Constructing Positioning DB Using Clustering of Local Features and Apparatus for Constructing Positioning DB (granted Apr 14, 2026; 2y 5m to grant)
Patent 12604296: NETWORKED ULTRAWIDEBAND POSITIONING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597163: Systems and Methods to Optimize Imaging Settings for a Machine Vision Job (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586394: METHOD, APPARATUS AND SYSTEM FOR AUTO-LABELING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579676: EGO MOTION-BASED ONLINE CALIBRATION BETWEEN COORDINATE SYSTEMS (granted Mar 17, 2026; 2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 88% (+4.4%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate

Based on 850 resolved cases by this examiner. Grant probability derived from career allow rate.
