Prosecution Insights
Last updated: April 19, 2026
Application No. 18/038,808

AUGMENTED REALITY DISPLAY DEVICE AND AUGMENTED REALITY DISPLAY SYSTEM

Final Rejection §103
Filed: May 25, 2023
Examiner: ESTEVEZ, DAIRON
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Fanuc Corporation
OA Round: 4 (Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 8m
Grant Probability with Interview: 51%

Examiner Intelligence

Career Allow Rate: 67% (43 granted / 64 resolved); +15.2% vs TC avg (above average)
Interview Lift: -15.9% (minimal negative lift); compares allow rate with vs. without an interview among resolved cases with an interview
Avg Prosecution (typical timeline): 2y 8m; 28 applications currently pending
Total Applications (career history): 92 across all art units
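The headline figures above are simple ratios of the examiner's case counts. A minimal Python sketch of how they could be derived, using the counts shown on this page (the TC-average estimate and the rounding behind the -15.9% lift are assumptions, not disclosed by the dashboard):

```python
# Hypothetical derivation of the examiner stats shown above,
# using the raw counts from this page (43 granted of 64 resolved).
granted = 43
resolved = 64

career_allow_rate = granted / resolved        # 0.671875, displayed as "67%"

# "+15.2% vs TC avg" implies an estimated Tech Center average near 52%.
tc_avg_estimate = career_allow_rate - 0.152

# Interview lift: allow rate among interviewed cases minus the career rate.
# "51% With Interview" vs. 67% overall gives roughly a -16% lift; the page's
# -15.9% presumably comes from unrounded inputs.
allow_rate_with_interview = 0.51
interview_lift = allow_rate_with_interview - career_allow_rate

print(f"career allow rate: {career_allow_rate:.1%}")   # 67.2%
print(f"TC average estimate: {tc_avg_estimate:.1%}")
print(f"interview lift: {interview_lift:+.1%}")
```

The same subtraction explains the "51% With Interview (-15.9%)" pairing in the Prosecution Projections section.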

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 54.3% (+14.3% vs TC avg)
§102: 18.9% (-21.1% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
TC averages are estimates; based on career data from 64 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed 12/16/2025 has been entered. Claims 1 and 5-7 remain pending in the application.

Response to Arguments

Applicant argues that Mizutani does not disclose or suggest adopting a display form that alerts the operator according to the danger level to the operator. This analysis of the Mizutani reference is erroneous, and Applicant's arguments are not persuasive. The rejection is further defined below, but in general, Mizutani certainly teaches that the changes to the display form alert the user of the augmented reality display device according to a danger level posed to the user. FIGs. 9-13 of Mizutani and the associated description show the ability to change the color of the representation of the motion space, and even the transparency of the superimposed displayed motion space, to alert the user and allow them to recognize their proximity to the robot's moving range. In P [0124], Mizutani rationalizes that these configurations are made so that "the worker 201 can recognize the motion space of the movable robot 202 three-dimensionally as well as intuitively, thus can avoid a danger such as a collision." It is also important to note that Applicant alleges that Mizutani merely teaches that a size of a displaying of a motion area of a robot changes proportionally to a distance between the robot and the operator. Distance is understandably a primary concern of Mizutani, as the closer an operator gets, the greater the risk of collision and injury, as seen in P [0124].
This is not dissimilar to P [0024] of the Specification of the presently filed application, which states "The display control unit 215 may change the display form of the AR image of the motion area of the robot 10 based on the distance between the robot 10 and the augmented reality display device 20 calculated by the distance calculation unit 213". In other words, the support for the claim amendment identifies distances that establish a level of risk associated with the operator's position. This is closely related to the disclosure of Mizutani.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: display control unit, coordinate acquisition unit, information acquisition unit, distance calculation unit, communication unit, and input unit in claims 1 and 5.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof; see P [0018] for the CPU processor that includes the display control unit, coordinate acquisition unit, information acquisition unit, and distance calculation unit. See also the transmission devices in P [0017] for the communication unit and the touch panel in P [0014] for the input unit. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f).

Examiner Note: To reiterate from the Non-Final Office action, Claim 1 recites the limitation "based on matching performed between feature quantities from the image of the robot captured by the camera and feature quantities of predetermined three-dimensional recognition models of the robot". The term "feature quantities" appears to broadly describe known image processing features of a robot such as edges, joints, links, or attached markers, and it is understood that one of ordinary skill in the art would view identification of such features as meeting the limitation.
Additionally, the term "predetermined three-dimensional recognition models" would appear to one of ordinary skill in the art to include any three-dimensional model of the robot. Therefore, the limitation describes matching features, such as edges, links, joints, or markers, from a captured image with a previously stored model of the robot and its corresponding features.

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claim(s) 1 and 5-7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kazi et al., hereinafter Kazi (Document ID: US 20040189631 A1), in view of Funk et al., hereinafter Funk (Document ID: DE 102018113336 A1), and further in view of Mizutani et al., hereinafter Mizutani (Document ID: US 20110311127 A1).

Regarding claim 1, Kazi teaches an augmented reality display device, comprising: a camera (camera 1.2); a display unit (viewing device 1.1); a display control unit (image generating unit 5.1) configured to display, on the display unit, an image of a robot captured by the camera and an augmented reality image of a motion area of the robot (see at least P [0085]: "an image generating unit 5.1, which evaluates the pose of the viewing unit 1.1 in accordance with the camera 1.2 and mixes the camera image with the robot-specific, computer-generated information, so that the real image and the information to be displayed are jointly displayed on the viewing device 1.1."). Kazi additionally teaches a coordinate acquisition unit, as processing unit 5, configured to acquire three-dimensional coordinates of a robot origin in a world coordinate system in at least P [0093], wherein an origin of the world coordinate system for the robot base is acquired based on the image of the real robot 7 in its environment.
But Kazi does not explicitly teach that the origin is based on matching performed between feature quantities from the image of the robot captured by the camera and feature quantities of predetermined three-dimensional recognition models of the robot. Instead, Funk, whose invention pertains to calibrating an augmented reality spatial representation of a robot environment, teaches in at least P [0069]-[0071] an "initial calibration" which "includes performing a coordinate comparison between the robot coordinate system of the robot 20 and the coordinate system of the augmented reality display environment 22." As seen in FIG. 2, this comparison step includes matching the image of the real robot 20 with the model of the robot 22. It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to have modified the augmented reality motion area viewing system of Kazi with the calibration step of Funk, involving comparison between a model and a real-time image, in order to display the augmented reality display environment 22 in the correct position, size, and orientation, as in P [0054]. Kazi further teaches spatial awareness and shifting the reference coordinate system for a robot tool center point or a workpiece to be interacted with in P [0093]-[0098]. Kazi additionally teaches a processing unit 5 as an information acquisition unit, and Funk teaches a depth camera 18 capable of determining relative position to "the base robot coordinate system" (Funk P [0062]).
But Kazi and Funk do not explicitly teach an information acquisition unit configured to acquire three-dimensional coordinates of the camera; and a distance calculation unit configured to calculate a distance between the robot and the augmented reality display device based on the three-dimensional coordinates of the robot origin and the three-dimensional coordinates of the camera. Instead, Mizutani, whose invention pertains to a motion space presentation device, teaches in at least P [0085] a position and posture detection unit 105 that "detects a position and a posture of the image capture unit 104 in the real world", wherein the image capture unit 104 is a camera. See also P [0097], wherein the distance to the worker, who holds the motion space presentation device 100 equipped with the image capture unit 104, is computed. It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to have modified the augmented reality motion area viewing system with calibration of Kazi and Funk with the distance-based awareness and functionality of Mizutani in order to distinguish the robot motion space from the rest of the environment in a user-friendly and appropriate manner, as in Mizutani P [0099].
In view of the modification, Kazi then teaches a communication unit configured to communicate with an external device (see at least P [0088], which defines capabilities of the processing unit to utilize wireless, network-based communication paths with a "manual programmer" as a communication unit, and see "robot control 6" as an external device for controlling the robot), wherein the information acquisition unit acquires setting information indicating the motion area of the robot from the external device (see at least P [0122] for defining operating areas "which the robot must not penetrate or must not leave." Thus the information acquisition unit--- processing unit 5--- acquires setting information--- operating area--- indicating the motion area of the robot from the external device--- the manual programmer communicates with the robot control); wherein the display control unit arranges the augmented reality image of the motion area of the robot on the image of the robot with respect to the three-dimensional coordinates of the robot origin acquired by the coordinate acquisition unit and displays the image on the display unit (see at least P [0085], which establishes mixing "the camera image with the robot-specific, computer-generated information", including the robot origin in P [0093] and FIGs. 7a-7c).

But Kazi and Funk do not explicitly teach wherein the display control unit changes a display form of the motion area of the robot to a form that alerts a user of the augmented reality display device according to a danger level to the user related to the calculated distance between the robot and the augmented reality display device. Instead, Mizutani teaches in at least P [0098] that the display form of the motion area is adapted based on the calculated distance. Mizutani additionally teaches in FIGs. 9-13 and P [0123]-[0136] the ability to change the color of the representation of the motion space, and even the transparency of the superimposed displayed motion space, to alert the user and allow them to recognize their proximity to the robot's moving range. It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to have modified the augmented reality motion area viewing system with calibration of Kazi and Funk with the distance-based awareness and functionality of Mizutani in order to distinguish the robot motion space from the rest of the environment in a user-friendly and appropriate manner, as in Mizutani P [0099]. In P [0124], Mizutani rationalizes that these configurations are made so that "the worker 201 can recognize the motion space of the movable robot 202 three-dimensionally as well as intuitively, thus can avoid a danger such as a collision."

Regarding claim 5, modified Kazi teaches the augmented reality display device according to claim 1, and Kazi further teaches an input unit configured to receive an input from a user (see at least P [0087]: "the processing unit 5 can also incorporate an interface for input devices, such as e.g. a manual programmer for a robot, which allow a spatial manipulation of the robot-specific information by means of a human user."), wherein the information acquisition unit acquires setting information indicating the motion area of the robot from the user via the input unit (see at least P [0087]: "robot-specific information, optionally whilst taking account of user inputs, are used in augmented reality models, which are in turn further processed by the image generating unit 5.1 for displaying the augmented image on the viewing device 1.1").
Regarding claim 6, modified Kazi teaches the augmented reality display device according to claim 1, and Kazi further teaches wherein the information acquisition unit acquires at least next target position coordinates of the robot (see at least P [0076], wherein the path of target points for the robot is calculated), and wherein the display control unit displays, on the display unit, an augmented reality image of a motion trajectory up to the next target position coordinates together with the augmented reality image of the motion area of the robot (see at least FIGs. 11-14 for examples of displaying an augmented reality image of a motion trajectory up to the next target position coordinates together with the augmented reality image of the motion area of the robot).

Regarding claim 7, modified Kazi teaches the augmented reality display device according to claim 1, and Kazi further teaches an augmented reality display system, comprising: a robot (robot 7); and the augmented reality display device (inventive device 1) according to claim 1.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Additional art made of record and not relied upon is considered pertinent to applicant's disclosure.
Document ID: US 20160207198 A1. Invention pertains to verifying one or more safety volumes for a movable mechanical unit positioned in an environment.
Document ID: US 20190187477 A1. Invention pertains to changing a display based on a determined risk level for preventing interference.
Document ID: US 20170087722 A1. Invention pertains to modeling a physical scene with a virtual scene in a simulation environment for safety in a collaborative environment.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Dairon Estevez, whose telephone number is (703) 756-4552. The examiner can normally be reached M-F 8:00 AM - 4:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Khoi Tran, can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/D.E./ Examiner, Art Unit 3656
/KHOI H TRAN/ Supervisory Patent Examiner, Art Unit 3656

Prosecution Timeline

May 25, 2023
Application Filed
Jan 30, 2025
Non-Final Rejection — §103
Mar 31, 2025
Response Filed
May 05, 2025
Final Rejection — §103
Aug 04, 2025
Response after Non-Final Action
Aug 27, 2025
Request for Continued Examination
Sep 05, 2025
Response after Non-Final Action
Sep 12, 2025
Non-Final Rejection — §103
Dec 16, 2025
Response Filed
Feb 24, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594681: EXTERNAL ROBOT STAND AND EXTERNAL ROBOT SYSTEM (2y 5m to grant; granted Apr 07, 2026)
Patent 12590806: SYSTEM-LEVEL OPTIMIZATION AND MODE SUGGESTION PLATFORM FOR TRANSPORTATION TRIPS (2y 5m to grant; granted Mar 31, 2026)
Patent 12569997: METHOD OF GENERATING ROBOT PATH AND COMPUTING DEVICE FOR PERFORMING THE METHOD (2y 5m to grant; granted Mar 10, 2026)
Patent 12559139: AUTONOMOUS DRIVING CONTROL APPARATUS (2y 5m to grant; granted Feb 24, 2026)
Patent 12555467: INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND MOBILE DEVICE (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 67%
With Interview: 51% (-15.9%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 64 resolved cases by this examiner. Grant probability derived from career allow rate.
