Prosecution Insights
Last updated: April 19, 2026
Application No. 18/033,007

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM

Status: Final Rejection — §103
Filed: Apr 20, 2023
Examiner: JOHNSON-CALDERON, FRANK J
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: NEC Corporation
OA Round: 2 (Final)
Grant Probability: 57% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 57% (127 granted / 222 resolved; -0.8% vs TC avg)
Interview Lift: +20.0% (strong lift, resolved cases with vs. without interview)
Avg Prosecution: 2y 11m (typical timeline; 21 currently pending)
Total Applications: 243 (career history, across all art units)
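As a quick arithmetic check on the figures above, the career allow rate is simply granted cases over resolved cases, and the 77% with-interview figure appears to be the base rate plus the reported +20-point lift. The additive treatment of the lift is an assumption about how the dashboard combines the two numbers, not something the page documents:

```python
# Sanity check of the dashboard figures; the additive treatment of the
# interview lift is an assumption, not documented by the page.
granted, resolved = 127, 222
allow_rate = granted / resolved                 # career allow rate
interview_lift = 0.20                           # +20.0 percentage points
with_interview = allow_rate + interview_lift

print(f"allow rate: {allow_rate:.1%}")          # ~57%
print(f"with interview: {with_interview:.1%}")  # ~77%
```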

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 67.1% (+27.1% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 222 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 1, 4-7, and 9-10 have been considered but are moot because the arguments do not apply to the new rejection made below. The examiner recommends further describing the unknown state in the claims to differentiate it from the prior art (e.g., by including limitations found in paragraph [0030], which recites a position, a posture, a shape, a weight, and surface characteristics).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1, 4-6, 9, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Sugaya (US 20210311455) in view of Yoneyama (US 20210101282) and Rosenberg (US 10335962).
Regarding claim 1, "An information processing device for efficiently determining an abnormal state of a target device using a virtual environment comprising: one or more memories storing instructions; and one or more processors configured to execute the instructions to": Sugaya teaches (¶0022) a system for operation verification that includes a computer to perform operation verification for a machine tool; (¶0079) the computer includes a processor, memory, and software/program; (¶0037-¶0038) the system allows for efficient and automatic correction.

As to "generate virtual observation information obtained by observing a result of simulating the real environment in which a target device to be evaluated exists": Sugaya teaches (¶0025 and ¶0045) that the computer 10 generates computer graphics (hereinafter "CG") virtually showing the operation of the machine tool for the predetermined time from the acquired data. The computer 10 also generates the CG image of not only the machine tool but also the image of a work object of the machine tool.

As to "and determine the abnormal state according to a difference between the virtual observation information and the real observation information": Sugaya teaches (¶0023) an imaging device that takes images of a machine tool plus various sensors to detect the operation/orientation of the machine tool (i.e., real observation information); (¶0025 and ¶0051) the computer 10 also acquires the image of the machine tool for the predetermined time while it is acquiring the data on the operation of the machine tool. The computer 10 compares the acquired image with the generated CG for the predetermined time, judges if there is a difference between the acquired image and the generated CG by the comparison, and detects an abnormality in the machine tool if the difference exists.
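The comparison mechanism the examiner cites from Sugaya (generate CG of the expected operation, compare it against camera frames over a predetermined time, and flag an abnormality when they differ) can be sketched roughly as below. The mean-absolute-difference metric, the threshold, and all names are illustrative assumptions, not details from the reference:

```python
def detect_abnormality(real_frames, cg_frames, threshold=0.1):
    """Flag an abnormality when the real observation frames deviate
    from the generated CG frames over a predetermined time window.
    Frames are flat lists of pixel values here; the mean-absolute-
    difference metric and the threshold are illustrative choices."""
    diffs = []
    for real, cg in zip(real_frames, cg_frames):
        diffs.append(sum(abs(r - c) for r, c in zip(real, cg)) / len(real))
    deviation = sum(diffs) / len(diffs)
    return deviation > threshold, deviation

# Matching frames produce no abnormality; a uniformly shifted frame does.
frame = [0.0] * 16
ok, dev = detect_abnormality([frame], [frame])         # (False, 0.0)
bad, dev2 = detect_abnormality([frame], [[1.0] * 16])  # (True, 1.0)
```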
Sugaya does not teach "estimate, as an unknown state, a state that is unknown or uncertain in a real environment, that is directly or indirectly estimatable from a real observation information obtained by observing the real environment, and that is a state should be considered to simulate the real environment." However, Yoneyama teaches (¶0033) a relative position of the virtual model S1 before correction (i.e., a state that is uncertain in a real environment) and a relative position of the virtual model S1 after correction. The markers M are detected by the detecting means 16, such as a camera built into or provided separately from the augmented reality display device 15 (e.g., a projector or head-mounted display), and the position of each object of the real equipment S2 and the relative position of each object with respect to the real robot (in other words, the position of the virtual model S1 and the relative position of each object with respect to the virtual robot after correction) are calculated. This means the relative position is directly or indirectly estimatable from real observation information (i.e., camera data) used to simulate the real environment. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system for operation verification as taught by Sugaya with the determination of relative position error from a relative position of the virtual model before correction and a relative position of the virtual model after correction as taught by Yoneyama, for the benefit of automatically calibrating the model (¶0037).
Sugaya and Yoneyama do not teach "set the virtual environment obtained by simulating the real environment based on the real observation information and the unknown state in the real environment." However, Rosenberg teaches (22:15-23:3) that adjustments to the USM module 308 simulations bring the simulations back into alignment with actual performance, and that the equilibrating adjustment can temporarily adjust the USM module simulation to bring it into accordance with the robot's current state. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system for operation verification as taught by Sugaya and Yoneyama with the adjustments and fault correction as taught by Rosenberg, so that when the robot 100 employs the simulation to calculate its movements, the resulting predictions can achieve sufficient accuracy for the robot to function effectively.

Regarding claim 4, "The information processing device according to claim 1, wherein the one or more processors are further configured to execute the instructions to: update at least one of the unknown state and a control plan for operating the target device based on a determination result of the abnormal state": Rosenberg further teaches (14:7-49) that the Fault Monitoring (FM) module 302 may be configured to detect any discrepancies between simulation data and real-time or near real-time sensor data. The FM module may be configured to control the process of processing and integrating data relevant to the possibility of fault, analyzing suggestive data, determining a probable fault, detecting specific evidence of a fault, analyzing the fault, and further determining an accurate cause for the fault.
When the FM module detects any discrepancy between the simulation (data obtained from the USM module 308) and sensor data (data obtained from the CSM module 310), the FM module may be configured to indicate or alert the detection of the fault, or a determination of a discrepancy, to other modules in the system. The FM module 302 may be configured to obtain information associated with the detected fault. The information to be obtained may be based on a discrepancy or pattern of discrepancies that has been determined. For example, once a fault is detected, the FM module may interact with one or more databases associated with the FM module in order to obtain diagnosis methods related to the specific fault. If the detected fault concerns the arm joints of the robot, the FM module may be configured to obtain information regarding the diagnosis methods to analyze the causes of fault concerning the arm joints. The mapping between detected faults and various diagnosis methods may be updated by the FDD system 206. In some embodiments, the updates may be based on whether or not the diagnosis method applied for the detected fault accurately accounts for and/or resolves the fault.
The association between the detected fault and the diagnosis methods may be strengthened or weakened based on the outcome of the diagnosis, and the parameters controlling the association may be tweaked or updated by the FDD system; (22:1-7) once the cause of the anomaly is identified, the nature and magnitude of the fault can be determined.

Regarding claim 5, "The information processing device according to claim 4, wherein the one or more processors are further configured to execute the instructions to: repeat update of at least one of the unknown state and a control plan for operating the target device until a determination result of the abnormal state satisfies a predetermined criterion": Rosenberg further teaches (14:7-49 and 2:4-20) that the Fault Monitoring (FM) module 302 may be configured to detect any discrepancies between simulation data and real-time or near real-time sensor data, and may control the process of processing and integrating data relevant to the possibility of fault, analyzing suggestive data, determining a probable fault, detecting specific evidence of a fault, analyzing the fault, and further determining an accurate cause for the fault. When the FM module detects any discrepancy between the simulation (data obtained from the USM module 308) and sensor data (data obtained from the CSM module 310), it may be configured to indicate or alert the detection of the fault, or a determination of a discrepancy, to other modules in the system. The FM module 302 may be configured to obtain information associated with the detected fault, based on a discrepancy or pattern of discrepancies that has been determined. For example, once a fault is detected, the FM module may interact with one or more databases associated with the FM module in order to obtain diagnosis methods related to the specific fault.
If the detected fault concerns the arm joints of the robot, the FM module may be configured to obtain information regarding the diagnosis methods to analyze the causes of fault concerning the arm joints. The mapping between detected faults and various diagnosis methods may be updated by the FDD system 206. In some embodiments, the updates may be based on whether or not the diagnosis method applied for the detected fault accurately accounts for and/or resolves the fault. The association between the detected fault and the diagnosis methods may be strengthened or weakened based on the outcome of the diagnosis, and the parameters controlling the association may be tweaked or updated by the FDD system; (22:1-7) once the cause of the anomaly is identified, the nature and magnitude of the fault can be determined; (22:15-23:3) adjustments to the USM module 308 simulations bring the simulations back into alignment with actual performance, and the equilibrating adjustment can temporarily adjust the USM module simulation to bring it into accordance with the robot's current state; (18:25-57) applying a Bayesian method to analyze discrepancies in light of past patterns.
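The claim 5 language (repeat the update of the unknown state or control plan until the abnormality determination satisfies a predetermined criterion) amounts to a simple iterative refinement loop. The sketch below is a hypothetical illustration: the update rule, deviation function, and criterion value are invented for the example and appear in neither the claims nor the cited art.

```python
def refine_until_criterion(state, deviation_fn, update_fn,
                           criterion=0.05, max_rounds=100):
    """Repeat the update of an unknown-state estimate until the
    deviation between virtual and real observations satisfies the
    criterion, or a round limit is reached. All names illustrative."""
    for _ in range(max_rounds):
        deviation = deviation_fn(state)
        if deviation <= criterion:
            return state, deviation, True
        state = update_fn(state, deviation)
    return state, deviation_fn(state), False

# Toy usage: the "unknown state" is a scalar offset; each update halves it.
state, dev, converged = refine_until_criterion(
    1.0, deviation_fn=abs, update_fn=lambda s, d: s / 2)
```

Five halvings bring the offset from 1.0 to 0.03125, which satisfies the 0.05 criterion.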
Regarding claim 6, "The information processing device according to claim 1, wherein the one or more processors are further configured to execute the instructions to: acquire, as the real observation information, image information obtained by observing the target device, generate, as the virtual observation information, image information of a same type as the real environment observed in the virtual environment, and determine an abnormal state of the target device based on the real observation information and the virtual observation information": Sugaya teaches (¶0023) an imaging device that takes images of a machine tool plus various sensors to detect the operation/orientation of the machine tool (i.e., real observation information); (¶0025 and ¶0051-¶0053) the computer 10 also acquires the image of the machine tool for the predetermined time while it is acquiring the data on the operation of the machine tool, compares the acquired image with the generated CG for the predetermined time, judges if there is a difference between the acquired image and the generated CG by the comparison, and detects an abnormality in the machine tool if the difference exists.

Regarding claim 9, its rejection is similar to claim 1. Regarding claim 10, its rejection is similar to claim 1.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Sugaya, Yoneyama, and Rosenberg in view of Vogelsong et al. (US 10926408, hereinafter Vogelsong).
Regarding claim 7, "The information processing device according to claim 1, wherein the one or more processors are further configured to execute the instructions to: calculate a degree of deviation between the real observation information and the virtual observation information based on the difference": Yoneyama further teaches (¶0010) an error in the layout between the virtual space and the real space; (¶0027, ¶0033) calculating a relative position error from relative positions of the virtual models before correction and relative positions of the virtual models after correction.

Sugaya, Yoneyama, and Rosenberg do not teach "set a reward according to the difference, wherein the reward is set lower as the degree of deviation is larger, and the reward is set higher as the degree of deviation is smaller; create a policy regarding an operation of the target device based on the reward; determine the operation of the target device according to the created policy; and cause the target device to perform the determined operation." However, Vogelsong teaches (13:35-51) that simulated trials are provided to the feedback engine 214, which generates success/reward scores or outputs comparison preferences indicating which of a number of performances was more successful. This can involve human judgment or can be automated. The evaluation from the feedback engine 214 guides the machine learning system 218 to generate and refine a robotic control policy for the task. The robotic control policy 236 is stored and then used during the next simulation of the task 101 in the simulated environment 230. The robotic control system 206 can repeat this loop until the robotic control policy 236 achieves the desired performance level within the simulated environment 230. The machine learning system 218 can implement the targeted update process 100 of FIGS. 1A and 1B using recorded observations 232 of simulated trials to iteratively update the policy until it achieves satisfactory performance in the simulated environment 305; (14:7-18) simulation occurs during real-world action; (17:37-67) the robotic control system 206 can assign a positive reward to the higher-performing observation, generate an update vector using reinforcement learning, and then update the network parameters via a gradient update by weighting the update vector with the change vector. This approach enables the robotic control system 206 to identify where and by how much policy A differed from policy B, and then leverage the feedback saying that policy A caused better performance in order to weight the updates to these areas more heavily. Where policy A and policy B were the same, no update may be applied. This approach logically presumes that the differences between policy A and policy B account for the superior performance of policy A, and so rewards these differences with more heavily weighted updates.

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system for operation verification as taught by Sugaya, Yoneyama, and Rosenberg with the concept of reinforcement learning as taught by Vogelsong, for the benefit of allowing systems to learn complex tasks or adapt to changing environments, and for increasing the likelihood that actions that yielded positive rewards will continue to occur.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Huang et al. (US 20200171671): (¶0490) the robot gets a positive reward when it reaches the target and a negative reward if it collides with an obstacle. Shiraishi et al. (US 20170031330): (¶0062) a negative reward may be given according to its deviation amount; that is, the greater the deviation amount, the greater the negative reward.
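Claim 7's reward rule (reward set lower as the degree of deviation grows, higher as it shrinks) and Vogelsong's pairwise preference between policy performances can be sketched minimally as follows. The exponential decay shape and all names are illustrative assumptions, taken from neither the claims nor the reference:

```python
import math

def deviation_reward(deviation, scale=1.0):
    """Map a degree of deviation to a reward: lower reward for larger
    deviation, higher for smaller. Exponential decay is an
    illustrative choice; any monotonically decreasing map would do."""
    return math.exp(-deviation / scale)

def prefer_policy(dev_a, dev_b):
    """Pairwise preference in the style of the cited feedback engine:
    the policy whose trial showed the smaller deviation wins."""
    return "A" if deviation_reward(dev_a) > deviation_reward(dev_b) else "B"

# Zero deviation earns the maximum reward of 1.0; larger deviations earn less.
print(deviation_reward(0.0))    # 1.0
print(prefer_policy(0.1, 0.9))  # A
```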
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK J JOHNSON, whose telephone number is (571) 272-9629. The examiner can normally be reached 9:00AM-3:00PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian T. Pendleton, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Frank Johnson/Primary Examiner, Art Unit 2425

Prosecution Timeline

Apr 20, 2023 — Application Filed
Feb 27, 2025 — Non-Final Rejection — §103
Jul 06, 2025 — Response Filed
Aug 19, 2025 — Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597262 — DETECTING AND IDENTIFYING OBJECTS REPRESENTED IN SENSOR DATA GENERATED BY MULTIPLE SENSOR SYSTEMS (2y 5m to grant • Granted Apr 07, 2026)
Patent 12583386 — METHOD FOR DETECTING TARGET PEDESTRIAN AROUND VEHICLE, METHOD FOR MOVING VEHICLE, AND DEVICE (2y 5m to grant • Granted Mar 24, 2026)
Patent 12575718 — UNIVERSAL ENDOSCOPE ADAPTER (2y 5m to grant • Granted Mar 17, 2026)
Patent 12574588 — Image Selection Using Motion Data (2y 5m to grant • Granted Mar 10, 2026)
Patent 12573219 — DEVICE AND METHOD FOR COUNTING AND IDENTIFICATION OF BACTERIAL COLONIES USING HYPERSPECTRAL IMAGING (2y 5m to grant • Granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 57%
With Interview: 77% (+20.0%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 222 resolved cases by this examiner. Grant probability derived from career allow rate.
