Prosecution Insights
Last updated: April 19, 2026
Application No. 18/324,476

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM STORING PROGRAM

Status: Non-Final OA (§103)
Filed: May 26, 2023
Examiner: AFRIN, NAZIA
Art Unit: 3666
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kyushu University National University Corporation
OA Round: 3 (Non-Final)
Grant Probability: 40% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 57%

Examiner Intelligence

Career Allow Rate: 40% (4 granted / 10 resolved; -12.0% vs TC avg)
Interview Lift: +16.7% (strong; allow rate among resolved cases with an interview vs. without)
Avg Prosecution: 3y 2m (typical timeline)
Total Applications: 73 career applications across all art units; 63 currently pending
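The headline figures above reduce to simple arithmetic. The sketch below is a minimal illustration, assuming the interview lift is the with-interview allow rate minus the career allow rate; the 56.7% with-interview input is back-calculated from the displayed 40% and +16.7%, not a published figure, and the variable names are illustrative rather than any vendor's schema.

```python
# Sketch of how the dashboard's examiner stats could be derived from counts.
# Inputs below are taken or back-calculated from the displayed figures.

granted = 4                   # "4 granted"
resolved = 10                 # "10 resolved"
with_interview_rate = 0.567   # assumed: allow rate among resolved cases with an interview

career_allow_rate = granted / resolved                     # 4/10 = 40% "Career Allow Rate"
interview_lift = with_interview_rate - career_allow_rate   # "+16.7% Interview Lift"

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Interview lift: {interview_lift:+.1%}")
```

Under this reading, the 57% "With Interview" projection is just the career rate plus the lift, rounded to the nearest point.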

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 60.7% (+20.7% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 6.4% (-33.6% vs TC avg)
Tech Center averages are estimates; based on career data from 10 resolved cases.
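Each "vs TC avg" delta is consistent with the examiner's statute-specific rate minus a Tech Center baseline; all four rows imply the same ~40.0% baseline (e.g. 60.7% − 20.7%). A minimal sketch, with the 40% baseline inferred from the displayed rows rather than taken from any USPTO publication:

```python
# Sketch reproducing the "vs TC avg" deltas: examiner statute-specific rate
# minus an estimated Tech Center average. The 0.40 baseline is inferred from
# the displayed figures (e.g. 60.7% - 20.7% = 40.0%), not a published number.

tc_avg_estimate = 0.40

examiner_rates = {"101": 0.118, "103": 0.607, "102": 0.211, "112": 0.064}

for statute, rate in examiner_rates.items():
    delta = rate - tc_avg_estimate
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```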

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claim 4 is cancelled. Claims 1-3 and 5-10 are pending. No new claim is added.

Response to Arguments

Applicant's amendments and arguments are entered. Applicant's remarks are also entered into the record. A new search was made, necessitated by the applicant's amendments and remarks. A new reference was found, and a new rejection is made herein. Applicant's arguments are now moot in view of the new rejection of the claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 5-10 are rejected under 35 U.S.C. 103 as being unpatentable over US11079897 to Yerli (hereinafter "Yerli") in view of KR20200098509A to Wakabayashi (hereinafter "Wakabayashi").
Regarding claim 1, Yerli teaches an information processing device (see Yerli at least abstract: a system and method enabling two-way interactive operations of real-time 3D virtual replicas and real objects are described) for communicating with an autonomous mobile device (see Yerli at least [column 1, lines 61-65]: the current disclosure provides a system and method enabling a natural control and real-time 3D-based interaction with and between real objects via an interface through an accurate virtual replica of the real objects), the information processing device comprising: circuitry (see Yerli, application specific integrated circuit), and a memory storing computer-executable instructions that cause the circuitry to execute (see Yerli [column 4, lines 47-50]: according to an embodiment, the servers may be provided as hardware and software including at least a processor and a memory): transmitting an operation command of an operator of the autonomous mobile device to the autonomous mobile device existing in a real world and a virtual mobile device simulating the autonomous mobile device and existing in a virtual world simulating the real world (see Yerli [column 14, lines 62-65]: any bidirectional commands between real objects 102 and real-time 3D virtual replicas 104, or between real-time 3D virtual replicas 104 and real objects 102, go through the persistent virtual world system 118); detecting an inhibitor inhibiting operability of the operator to operate the autonomous mobile device (see Yerli [column 8, lines 48-52]: in another example, a pizza delivery drone can use the virtual model of a city to find the desired destination, and may use the real visual sensors only to detect and avoid objects that may not be in the persistent virtual world system); the real mode providing the operator with sensory feedback corresponding to an autonomous movement of the autonomous mobile device in the real world (see Yerli [column 8, lines 40-45]: a physical visual sensor (e.g., cameras or optical sensors) is either failing or missing in the robot); and the virtual mode providing the operator with sensory feedback corresponding to a movement according to the operation command of the virtual mobile device in the virtual world (see Yerli [column 8, lines 53-62]: manipulate a real-time 3D virtual replica of a surgical apparatus that has a real counterpart in a surgical room; other staff (e.g., doctors, nurses, etc.) may view the virtual avatar of the doctor performing the surgery and may assist him as needed; in order to increase accuracy, cameras may capture the real patient and the operations room, which may be integrated in the virtual world version displayed to the remote doctor so that he can view in real time the situation in the operation room).

However, Yerli does not expressly or otherwise teach switching from a real mode to a virtual mode in response to the detecting and, after switching from the real mode to the virtual mode, in order to reduce at least one of a position deviation or a posture deviation between the autonomous mobile device and the virtual mobile device, adjusting a corresponding one of a position and a posture of the virtual mobile device by automatically reducing a corresponding one of a position change speed and a posture change speed of the virtual mobile device. Nevertheless, Wakabayashi, in the same field of endeavor, teaches switching from a real mode to a virtual mode in response to the detecting (see Wakabayashi at least paras. [0118], [0131], [0147]) and, after switching from the real mode to the virtual mode, in order to reduce at least one of a position deviation or a posture deviation between the autonomous mobile device and the virtual mobile device (see Wakabayashi para. [0004]: thereby, it can be realized that the virtual object is superimposed on the setting position regardless of the position and posture of the HMD; here, an error in position and posture information may occur due to a detection error of a sensor used to detect the position and posture of the HMD; see also paras. [0006]-[0010], [0048], [0065]-[0067]), adjusting a corresponding one of a position and a posture of the virtual mobile device by automatically reducing a corresponding one of a position change speed and a posture change speed of the virtual mobile device (see Wakabayashi Abstract and para. [0110]: when the change speed of the position or posture of the display device 1 is the first change speed, compared to the case where the change speed of the position or posture of the display device 1 is a second change speed smaller than the first change speed; see also paras. [0136]-[0137], [0196], [0201]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yerli's system and method enabling two-way interactive operations of real-time 3D virtual replicas and real objects with Wakabayashi's switching from virtual to real mode based on adjusting position and posture information, in order to allow superimposing a computer in the real space based on positional and posture information indicating a position and posture of a moving object (see Wakabayashi para. [0010]).

Regarding claim 2, Yerli and Wakabayashi remain applied as in claim 1.
Yerli teaches the circuitry is caused to execute the autonomous mobile device autonomously avoiding the inhibitor (see Yerli [column 8, lines 39-48]: for example, in a scenario of a factory robot configured to transport materials from one place to another, if a physical visual sensor (e.g., cameras or optical sensors) is either failing or missing in the robot, the robot may use a virtual visual sensor by employing the virtual map of the factory comprising the 3D coordinates and 3D structure of each item in order to detect and accordingly avoid obstacles already located in the persistent virtual world system, such as walls, tables, or other real objects). However, Yerli does not expressly or otherwise teach wherein, after switching from the real mode to the virtual mode, the circuitry is caused to execute switching from the virtual mode to the real mode. Nevertheless, Wakabayashi, in the same field of endeavor, teaches wherein, after switching from the real mode to the virtual mode, the circuitry is caused to execute switching from the virtual mode to the real mode (see switching the transition display control to normal display control in paras. [0118], [0131], [0147], [0156]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yerli's system and method enabling two-way interactive operations of real-time 3D virtual replicas and real objects with Wakabayashi's switching from virtual to real mode based on adjusting position and posture information, in order to allow superimposing a computer in the real space based on positional and posture information indicating a position and posture of a moving object (see Wakabayashi para. [0010]).

Regarding claim 3, Yerli and Wakabayashi remain applied as in claim 1.
Yerli teaches the circuitry is caused to execute controlling the movement of the virtual mobile device in accordance with the operation command (see Yerli [column 13, lines 51-55]: also to enable an accurate control of the objects with 6 degrees of freedom via the real-time 3D virtual replica, which may be reflected as a life-like manipulation of the real object through the real-time 3D virtual replica). However, Yerli does not expressly or otherwise teach wherein, after switching from the real mode to the virtual mode, the circuitry is caused to execute switching from the virtual mode to the real mode. Nevertheless, Wakabayashi, in the same field of endeavor, teaches wherein, after switching from the real mode to the virtual mode, the circuitry is caused to execute switching from the virtual mode to the real mode (see switching the transition display control to normal display control in paras. [0118], [0131], [0147], [0156]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yerli's system and method enabling two-way interactive operations of real-time 3D virtual replicas and real objects with Wakabayashi's switching from virtual to real mode based on adjusting position and posture information, in order to allow superimposing a computer in the real space based on positional and posture information indicating a position and posture of a moving object (see Wakabayashi para. [0010]).

Regarding claim 5, Yerli and Wakabayashi remain applied as in claim 1.
Yerli teaches the circuitry is caused to execute determining that the autonomous mobile device is required to autonomously avoid the inhibitor and requesting the autonomous mobile device to perform an autonomous movement for autonomously avoiding the inhibitor (see Yerli [column 8, lines 43-47]: the robot may use a virtual visual sensor by employing the virtual map of the factory comprising the 3D coordinates and 3D structure of each item in order to detect and accordingly avoid obstacles already located in the persistent virtual world system). However, Yerli does not expressly or otherwise teach wherein, after switching from the real mode to the virtual mode, the circuitry is caused to execute switching from the virtual mode to the real mode. Nevertheless, Wakabayashi, in the same field of endeavor, teaches wherein, after switching from the real mode to the virtual mode, the circuitry is caused to execute switching from the virtual mode to the real mode (see switching the transition display control to normal display control in paras. [0118], [0131], [0147], [0156]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yerli's system and method enabling two-way interactive operations of real-time 3D virtual replicas and real objects with Wakabayashi's switching from virtual to real mode based on adjusting position and posture information, in order to allow superimposing a computer in the real space based on positional and posture information indicating a position and posture of a moving object (see Wakabayashi para. [0010]).

Regarding claim 6, Yerli and Wakabayashi remain applied as in claim 1. Yerli teaches the circuitry is caused to execute determining that the autonomous mobile device is not required to autonomously avoid the inhibitor and switching from the virtual mode to the real mode (see Yerli [column 8, lines 48-52]: a pizza delivery drone can use the virtual model of a city to find the desired destination, and may use the real visual sensors only to detect and avoid objects that may not be in the persistent virtual world system).

Regarding claim 7, Yerli and Wakabayashi remain applied as in claim 1. Yerli teaches wherein setting a position where the inhibitor is not present on the future trajectory as a target position, and requesting the autonomous mobile device to perform an autonomous movement for moving to the target position while autonomously avoiding the inhibitor (see Yerli [column 27, lines 54-57]: the processor 418 may be configured to process manipulation instructions directly input via the I/O module 402 or coming from the server 106 and send the processed instructions to actuators 410 for performing the required movements of effectors 412).

Regarding claim 8, Yerli and Wakabayashi remain applied as in claim 1. Yerli teaches when the autonomous mobile device detects that the autonomous movement is inhibited by the inhibitor in the real world (see Yerli [column 13, lines 12-15]: in real time, process the selection and manipulation instructions and transmit the instructions to the respective actuators of the real object 102).

Regarding claim 9, Yerli teaches an information processing system (see Yerli abstract: mobile devices that provide more user friendly and flexible Human Machine Interfaces) including an autonomous mobile device and an information processing device for communicating with the autonomous mobile device (see Yerli [column 1, lines 29-33]: these workstations have, in recent times, been upgraded from fixed computers to mobile devices that provide more user friendly and flexible Human Machine Interfaces (HMI)), the information processing system including: circuitry; and a memory storing computer-executable instructions that cause the circuitry to execute (see Yerli [column 4, lines 47-50]: according to an embodiment, the servers may be provided as hardware and software including at least a processor and a memory): transmitting the operation command to the autonomous mobile device existing in a real world and a virtual mobile device existing in a virtual world simulating the real world and simulating the autonomous mobile device (see Yerli [column 14, lines 62-65]: any bidirectional commands between real objects 102 and real-time 3D virtual replicas 104, or between real-time 3D virtual replicas 104 and real objects 102, go through the persistent virtual world system 118); providing the operator with sensory feedback corresponding to a movement of the autonomous mobile device in the real world (see Yerli [column 8, lines 40-45]: a physical visual sensor (e.g., cameras or optical sensors) is either failing or missing in the robot); detecting an inhibitor inhibiting operability of the operator to operate the autonomous mobile device (see Yerli [column 8, lines 48-52]: a pizza delivery drone can use the virtual model of a city to find the desired destination, and may use the real visual sensors only to detect and avoid objects that may not be in the persistent virtual world system); the real mode providing the operator with sensory feedback corresponding to an autonomous movement of the autonomous mobile device in the real world (see Yerli [column 8, lines 40-45]); and the virtual mode providing the operator with sensory feedback corresponding to a movement according to the operation command of the virtual mobile device in the virtual world (see Yerli [column 8, lines 53-62]: manipulate a real-time 3D virtual replica of a surgical apparatus that has a real counterpart in a surgical room; other staff (e.g., doctors, nurses, etc.) may view the virtual avatar of the doctor performing the surgery and may assist him as needed; in order to increase accuracy, cameras may capture the real patient and the operations room, which may be integrated in the virtual world version displayed to the remote doctor so that he can view in real time the situation in the operation room).

However, Yerli does not expressly or otherwise teach switching from a real mode to a virtual mode in response to the detecting; after switching from the real mode to the virtual mode, in order to reduce at least one of a position deviation or a posture deviation between the autonomous mobile device and the virtual mobile device, adjusting a corresponding one of a position and a posture of the virtual mobile device by automatically reducing a corresponding one of a position change speed and a posture change speed of the virtual mobile device; or controlling an operation of the autonomous mobile device in response to an operation command of an operator of the autonomous mobile device to the autonomous mobile device and an autonomous movement request. Nevertheless, Wakabayashi, in the same field of endeavor, teaches controlling an operation of the autonomous mobile device in response to an operation command of an operator of the autonomous mobile device to the autonomous mobile device and an autonomous movement request (see Wakabayashi [column 10, lines 50-55]: user devices and real objects may refer to the same device; for example, a land vehicle may refer to a real object that can be controlled by a real or artificial intelligence user; however, the vehicle may include augmented reality user interfaces (e.g., on the windshield or windows) that can allow a user to interact with the vehicle, send commands to a self-driving artificial intelligence system, or even control the vehicle itself through such interfaces, thus allowing the car to act as a user device); switching from a real mode to a virtual mode in response to the detecting (see Wakabayashi at least paras. [0118], [0131], [0147]); and, after switching from the real mode to the virtual mode, in order to reduce at least one of a position deviation or a posture deviation between the autonomous mobile device and the virtual mobile device (see Wakabayashi para. [0004]: thereby, it can be realized that the virtual object is superimposed on the setting position regardless of the position and posture of the HMD; here, an error in position and posture information may occur due to a detection error of a sensor used to detect the position and posture of the HMD; see also paras. [0006]-[0010], [0048], [0065]-[0067]), adjusting a corresponding one of a position and a posture of the virtual mobile device by automatically reducing a corresponding one of a position change speed and a posture change speed of the virtual mobile device (see Wakabayashi Abstract and para. [0110]: when the change speed of the position or posture of the display device 1 is the first change speed, compared to the case where the change speed of the position or posture of the display device 1 is a second change speed smaller than the first change speed; see also paras. [0136]-[0137], [0196], [0201]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yerli's system and method enabling two-way interactive operations of real-time 3D virtual replicas and real objects with Wakabayashi's switching from virtual to real mode based on adjusting position and posture information, in order to allow superimposing a computer in the real space based on positional and posture information indicating a position and posture of a moving object (see Wakabayashi para. [0010]).

Regarding claim 10, Yerli teaches a non-transitory recording medium storing computer-executable instructions that cause circuitry of an information processing device (see Yerli abstract and [column 1, lines 30-33]: mobile devices that provide more user friendly and flexible Human Machine Interfaces) for communicating with an autonomous mobile device to execute: transmitting an operation command of an operator of the autonomous mobile device to the autonomous mobile device existing in a real world and a virtual mobile device existing in a virtual world simulating the real world and simulating the autonomous mobile device (see Yerli [column 14, lines 62-65]: any bidirectional commands between real objects 102 and real-time 3D virtual replicas 104, or between real-time 3D virtual replicas 104 and real objects 102, go through the persistent virtual world system 118); the real mode providing the operator with sensory feedback corresponding to an autonomous movement of the autonomous mobile device in the real world (see Yerli [column 8, lines 40-45]: a physical visual sensor (e.g., cameras or optical sensors) is either failing or missing in the robot); and the virtual mode providing the operator with sensory feedback corresponding to a movement according to the operation command of the virtual mobile device in the virtual world (see Yerli [column 8, lines 52-58]: manipulate a real-time 3D virtual replica of a surgical apparatus that has a real counterpart in a surgical room; other staff (e.g., doctors, nurses, etc.) may view the virtual avatar of the doctor performing the surgery and may assist him as needed; in order to increase accuracy, cameras may capture the real patient and the operations room, which may be integrated in the virtual world version displayed to the remote doctor so that he can view in real time the situation in the operation room).

However, Yerli does not expressly or otherwise teach switching from a real mode to a virtual mode in response to the detecting and, after switching from the real mode to the virtual mode, in order to reduce at least one of a position deviation or a posture deviation between the autonomous mobile device and the virtual mobile device, adjusting a corresponding one of a position and a posture of the virtual mobile device by automatically reducing a corresponding one of a position change speed and a posture change speed of the virtual mobile device. Nevertheless, Wakabayashi, in the same field of endeavor, teaches switching from a real mode to a virtual mode in response to the detecting (see Wakabayashi at least paras. [0118], [0131], [0147]) and, after switching from the real mode to the virtual mode, in order to reduce at least one of a position deviation or a posture deviation between the autonomous mobile device and the virtual mobile device (see Wakabayashi para. [0004]: thereby, it can be realized that the virtual object is superimposed on the setting position regardless of the position and posture of the HMD; here, an error in position and posture information may occur due to a detection error of a sensor used to detect the position and posture of the HMD; see also paras. [0006]-[0010], [0048], [0065]-[0067]), adjusting a corresponding one of a position and a posture of the virtual mobile device by automatically reducing a corresponding one of a position change speed and a posture change speed of the virtual mobile device (see Wakabayashi Abstract and para. [0110]: when the change speed of the position or posture of the display device 1 is the first change speed, compared to the case where the change speed of the position or posture of the display device 1 is a second change speed smaller than the first change speed; see also paras. [0136]-[0137], [0196], [0201]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yerli's system and method enabling two-way interactive operations of real-time 3D virtual replicas and real objects with Wakabayashi's switching from virtual to real mode based on adjusting position and posture information, in order to allow superimposing a computer in the real space based on positional and posture information indicating a position and posture of a moving object (see Wakabayashi para. [0010]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAZIA AFRIN, whose telephone number is (703) 756-1175. The examiner can normally be reached Monday-Friday, 7:30-6. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Scott A Browne, can be reached at 571-270-0151.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/NAZIA AFRIN/
Examiner, Art Unit 3666

/SCOTT A BROWNE/
Supervisory Patent Examiner, Art Unit 3666

Prosecution Timeline

May 26, 2023
Application Filed
Apr 03, 2025
Non-Final Rejection — §103
Jun 27, 2025
Response Filed
Sep 03, 2025
Final Rejection — §103
Dec 09, 2025
Request for Continued Examination
Dec 22, 2025
Response after Non-Final Action
Feb 02, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600603
CRANE, CRANE CHARACTERISTIC CHANGE DETERMINATION DEVICE, AND CRANE CHARACTERISTIC CHANGE DETERMINATION SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12585271
ACTIVE GEOFENCING SYSTEM AND METHOD FOR SEAMLESS AIRCRAFT OPERATIONS IN ALLOWABLE AIRSPACE REGIONS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12560927
NAVIGATION METHOD AND ROBOT THEREOF
Granted Feb 24, 2026 (2y 5m to grant)
Based on this examiner's 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 40%
With Interview: 57% (+16.7%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
