Prosecution Insights
Last updated: April 19, 2026
Application No. 18/844,212

INFORMATION PROCESSING DEVICE AND CONTROLLER

Non-Final OA (§102, §103)

Filed: Sep 05, 2024
Examiner: WALLACE, ZACHARY JOSEPH
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kyocera Corporation
OA Round: 1 (Non-Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 72% (130 granted / 180 resolved; +20.2% vs TC avg, above average)
Interview Lift: +20.0% across resolved cases with interview
Avg Prosecution: 2y 9m (13 applications currently pending)
Total Applications: 193 (across all art units)

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 42.6% (+2.6% vs TC avg)
§102: 29.1% (-10.9% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)

Deltas are measured against the Tech Center average estimate. Based on career data from 180 resolved cases.
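The four statute deltas are mutually consistent: subtracting each "vs TC avg" delta from the examiner's rate recovers the same Tech Center baseline. A quick sketch in Python (values copied from the table above; the baseline figure is derived here, not stated on the page):

```python
# Cross-check of the statute-specific table: each examiner rate minus its
# "vs TC avg" delta should recover the same Tech Center baseline.
examiner_rate = {"101": 11.2, "103": 42.6, "102": 29.1, "112": 13.5}
delta_vs_tc = {"101": -28.8, "103": +2.6, "102": -10.9, "112": -26.5}

tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
# Every statute implies the same ~40.0% Tech Center baseline,
# so the deltas are internally consistent.
print(tc_avg)
```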

Office Action

Grounds: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 09/05/2024 has been considered and is in compliance with the provisions of 37 CFR 1.97.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-7 and 11-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kitano et al. (US 2023/0249333; hereinafter Kitano).

Regarding Claim 1: Kitano discloses an information processing device comprising: a display (Kitano, Para. [0034], Kitano discloses a display); and an inputter configured to receive a user input (Kitano, Para. [0032], Kitano discloses an inputter for receiving an input operation from a user), wherein when the display displays a first detection result of detecting at least one candidate target area including a candidate for a destination area of an operation target object and a second detection result of detecting at least one candidate object including a candidate for the operation target object in a work environment of the robot (Kitano, Para. [0034], [0040], [0049-0050], Kitano discloses displaying the starting and ending points for the target object within the work environment), the inputter is configured to receive a destination input specifying the destination area from the at least one candidate target area (Kitano, Para. [0046], [0049], Kitano discloses the inputter receives at least the workable area for the robotic device, including starting and ending points).

Regarding Claim 2: Kitano discloses the information processing device according to claim 1. Kitano further discloses the destination input is input through an operation on a candidate object included in the at least one candidate object (Kitano, Para. [0026-0027], Kitano discloses the target object (i.e., substrate) is delivered to the destination through operations of the robotic device).

Regarding Claim 3: Kitano discloses the information processing device according to claim 2. Kitano further discloses the destination input is input through an operation of dragging the candidate object to a candidate target area included in the at least one candidate target area and dropping the candidate object in the candidate target area (Kitano, Para. [0042], [0046], Kitano discloses a drag and drop operation as an input operation when instructing the movement trajectory of the robotic device).

Regarding Claim 4: Kitano discloses the information processing device according to claim 1.
Kitano further discloses the at least one candidate target area includes a candidate of a source area of the operation target object (Kitano, Para. [0026-0027], [0049-0050], Kitano discloses the ending point is the area in which the object is delivered), and the inputter is configured to receive a source input specifying the source area from the at least one candidate target area (Kitano, Para. [0045-0048], Kitano discloses the inputter is configured to receive the end point from the user input operation).

Regarding Claim 5: Kitano discloses the information processing device according to claim 4. Kitano further discloses the source input is input through an operation on a candidate object included in the at least one candidate object (Kitano, Para. [0026-0027], Kitano discloses the target object (i.e., substrate) is delivered to the destination through operations of the robotic device based on the user input operations, see at least Para. [0049-0050]).

Regarding Claim 6: Kitano discloses the information processing device according to claim 4. Kitano further discloses the display is configured to display, when the inputter receives the source input, a target object in the specified source area as the second detection result (Kitano, Para. [0046-0049], Kitano discloses the inputter receives at least the workable area for the robotic device, including starting and ending points).

Regarding Claim 7: Kitano discloses the information processing device according to claim 1. Kitano further discloses the inputter is configured to receive a target object input specifying the operation target object from the at least one candidate object (Kitano, Para. [0045-0050], Kitano discloses the inputter is configured to receive the starting and ending points for the objects), and the target object input is input through at least one of the destination input or a source input specifying a source area of the operation target object (Kitano, Para. [0045-0050], Kitano discloses the object starting and ending points are entered as an input by the user through the inputter).

Regarding Claim 11: Kitano discloses the information processing device according to claim 1. Kitano further discloses the inputter is configured to receive a first program input specifying a first program to be used from a plurality of different first programs for detecting a plurality of different candidate target areas (Kitano, Para. [0007], [0035-0036], Kitano discloses the inputter selects a trajectory derivation program from among a plurality of programs within the teaching program).

Regarding Claim 12: Kitano discloses the information processing device according to claim 11. Kitano further discloses the display is configured to display a candidate for at least one first program usable among the plurality of different first programs (Kitano, Para. [0034], [0038], [0041], Kitano discloses the display is configured to display the user's input operation).

Regarding Claim 13: Kitano discloses the information processing device according to claim 12. Kitano further discloses the candidate for the at least one first program is changed based on the work environment (Kitano, Para. [0036], [0046], Kitano discloses the work environment area is defined by the user input operation and the trajectory derivation program is changed based on the work area).

Regarding Claim 14: Kitano discloses the information processing device according to claim 1. Kitano further discloses the inputter is configured to receive a second program input specifying a second program to be used from a plurality of different second programs for detecting a plurality of different candidate objects (Kitano, Para. [0041-0042], Kitano discloses the inputter is configured to receive user input operations that cause the movement program to produce movements of the robot).

Regarding Claim 15: Kitano discloses the information processing device according to claim 14. Kitano further discloses the display is configured to display a candidate for at least one second program usable among the plurality of different second programs (Kitano, Para. [0034], [0038], [0041], Kitano discloses the display is configured to display the user's input operation).

Regarding Claim 16: Kitano discloses the information processing device according to claim 15. Kitano further discloses the candidate for the at least one second program is changed based on the work environment (Kitano, Para. [0036], [0046], Kitano discloses the work environment area is defined by the user input operation and the movement program is changed based on the work area).

Regarding Claim 17: Kitano discloses the information processing device according to claim 1. Kitano further discloses the display is configured to display a work environment image of the work environment, and the work environment image includes at least part of the first detection result and the second detection result (Kitano, Para. [0034], [0040], [0049-0050], Kitano discloses the display is configured to display the work environment including the starting and ending points).

Regarding Claim 18: Kitano discloses the information processing device according to claim 1. Kitano further discloses the display is configured to display, as the first detection result and the second detection result, list information listing the at least one candidate target area and the at least one candidate object (Kitano, Para. [0034], [0040], [0049-0050], Kitano discloses the display is configured to display the work environment including the starting and ending points).

Regarding Claim 19: Kitano discloses the information processing device according to claim 1. Kitano further discloses a touchscreen display including the display and the inputter (Kitano, Para. [0032], Kitano discloses the inputter as a keyboard or mouse, and may even be configured as a touch screen, Para. [0002]).
Regarding Claim 20: Kitano discloses the information processing device according to claim 1. Kitano further discloses a controller connectable to the display and the inputter included in the information processing device according to claim 1 to communicate with the display and the inputter, the controller being configured to process information to control a robot configured to transfer an operation target object in a source area to a destination area (Kitano, Para. [0021], Fig. 2, Kitano discloses a control device configured to be connected to the display, and the control device performs operations on the work object, see at least Para. [0042], [0066]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Kitano in view of Dan et al. (US 2023/0373098; hereinafter Dan).

Regarding Claim 8: Kitano discloses the information processing device according to claim 7. Kitano fails to explicitly disclose the second detection result includes a category result indicating a category of the candidate for the operation target object. However, Dan, in the same field of endeavor of robotic controls, discloses the second detection result includes a category result indicating a category of the candidate for the operation target object (Dan, Para. [0012], Dan discloses displaying category results for the object type, such as block, pencil, screw, etc.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kitano so as to include displaying a category type of the object as disclosed by Dan with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this combination in order to improve the interaction between the operator and the robot system, see at least Dan Para. [0013].

Regarding Claim 9: The combination of Kitano and Dan discloses the information processing device according to claim 8. Kitano fails to explicitly disclose the inputter is configured to receive a target object category input specifying the category of the operation target object as the target object input and a candidate object being the at least one candidate object and being in the category specified by the target object category input is specified as the operation target object.
However, Dan discloses the inputter is configured to receive a target object category input specifying the category of the operation target object as the target object input (Dan, Para. [0012], Dan discloses the user input is configured to receive object type information), and a candidate object being the at least one candidate object and being in the category specified by the target object category input is specified as the operation target object (Dan, Para. [0012], Dan discloses the user chooses an object within the specified category type). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kitano so as to include an inputter configured to choose a category type of the object as disclosed by Dan with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this combination in order to improve the interaction between the operator and the robot system, see at least Dan Para. [0013].

Regarding Claim 10: The combination of Kitano and Dan discloses the information processing device according to claim 9. Kitano fails to explicitly disclose a candidate object being the at least one candidate object, being in the category specified by the target object category input, and being in the source area is specified as the operation target object. However, Dan discloses a candidate object being the at least one candidate object, being in the category specified by the target object category input, and being in the source area is specified as the operation target object (Dan, Para. [0012], Dan discloses the object within the chosen category as being within view of the camera). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kitano so as to include the object as being within the chosen category as disclosed by Dan with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this combination in order to improve the interaction between the operator and the robot system, see at least Dan Para. [0013].

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZACHARY JOSEPH WALLACE whose telephone number is (469) 295-9087. The examiner can normally be reached 7:00 am - 5:00 pm, Monday - Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wade Miles, can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Z.J.W./Examiner, Art Unit 3656 /WADE MILES/Supervisory Patent Examiner, Art Unit 3656

Prosecution Timeline

Sep 05, 2024
Application Filed
Dec 12, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600038
MOTION CONTROL METHOD AND ROBOT
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602029
METHOD AND SYSTEM FOR TASK PLANNING FOR VISUAL ROOM REARRANGEMENT UNDER PARTIAL OBSERVABILITY
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12589496
TRAJECTORY GENERATION SYSTEM, TRAJECTORY GENERATION METHOD, AND NON-TRANSITORY STORAGE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12583465
METHOD FOR REDUNDANT MONITORING OF DRIVING FUNCTIONS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12576882
Method and Device for Prioritizing Route Incidents
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 92% (+20.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 180 resolved cases by this examiner. Grant probability derived from career allow rate.
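The projection tiles follow from simple arithmetic on the figures shown elsewhere on the page. A minimal sketch in Python, assuming (per the footnote) that grant probability is the examiner's raw career allow rate and that the interview lift is additive:

```python
# How the projections fit together, using the career record shown under
# Examiner Intelligence. The additive-lift reading is an assumption that
# matches the displayed 72% / 92% figures.
granted, resolved = 130, 180                  # career record from the page
interview_lift = 0.20                         # "+20.0% Interview Lift"

allow_rate = granted / resolved               # -> displayed as 72%
with_interview = allow_rate + interview_lift  # -> displayed as 92%

print(f"base {allow_rate:.0%}, with interview {with_interview:.0%}")
```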
