Prosecution Insights
Last updated: April 19, 2026
Application No. 18/276,643

DISPLAY APPARATUS, COMMUNICATION SYSTEM, DISPLAY CONTROL METHOD, AND RECORDING MEDIUM

Status: Non-Final OA (§103)
Filed: Aug 10, 2023
Examiner: DOUGLAS, SHANE EMANUEL
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Ricoh Company Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 17% (At Risk)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 4m
Grant Probability With Interview: 39%

Examiner Intelligence

Career Allow Rate: 17% (grants only 17% of cases; 2 granted / 12 resolved; -35.3% vs TC avg)
Interview Lift: +22.2% (grant rate with vs. without an interview, among resolved cases with an interview)
Avg Prosecution: 2y 4m (typical timeline; 44 applications currently pending)
Total Applications: 56 (career history, across all art units)

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 59.4% (+19.4% vs TC avg)
§102: 30.3% (-9.7% vs TC avg)
§112: 2.5% (-37.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 12 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendments

This action is in response to the amendments and remarks filed on 09/15/2025. Claims 24-37, 39-41, 43, and 44 are pending. Claims 24, 39, 40, and 41 are amended. Claim 44 is new. Applicant's amendment necessitated new grounds of rejection, rendering claims 24-37, 39-41, 43, and 44 rejected.

Response to Arguments

Applicant presents the following arguments regarding the previous office action: Allard does not teach or suggest a system where the mobile apparatus automatically captures and registers images. Furthermore, Allard provides no disclosure of triggering any image capture based on the specific criteria recited in the claim, such as a preset task having been performed, the mobile apparatus having stopped moving, or the detection of an intersection. Courbon does not teach triggering image capture when a preset task has been performed, the mobile apparatus has stopped moving, or an intersection has been detected near the mobile apparatus. Applicant's arguments have been fully considered but are moot in light of the new grounds of rejection below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 24, 29, 31-33, 35, 37, and 39-40 are rejected under 35 U.S.C. 103 as being unpatentable over Allard et al. (US 20030216834 A1) in view of Denuelle et al. (Snapshot-based Navigation for the Guidance of UAS), further in view of Lee et al. (An implementation of the procedural reasoning system for multirobot applications), further in view of Peerless et al. (ExR-1 Robot Operating Guide).

Regarding claim 24, Allard discloses a display apparatus, comprising circuitry (Abstract, a system for tele-operating a robot in an environment includes a user interface for controlling the tele-operation of the robot, an imaging device associated with the robot for providing image information representative of the environment around the robot), and (iv) the mobile apparatus being provided in a first site that is different from a second site in which the display apparatus is provided (Abstract, a system for tele-operating a robot in an environment includes a user interface for controlling the tele-operation of the robot) … (an imaging device associated with the robot for providing image information representative of the environment around the robot); display, on a display, the plurality of captured images for selection by a user (0045, the heads-up display 310 continuously shows the most recent camera image received from the robot. In the preferred embodiment, a number of computer-generated images are overlaid on top of the camera image); receive selection of one or more captured images from among the plurality of captured images being displayed (0069, as the user moves the cursor arrow 405 within the heads-up display 310, the user interface constantly redraws the targeting circles 410 and 412 and the perspective box 430 corresponding to the location of the cursor arrow 405. As the user moves the cursor around the heads-up display, the user is able to choose a waypoint); and perform a predetermined operation with respect to the mobile apparatus (0070, once the waypoint 460 has been selected, the waypoint is added to the set of current waypoint drive targets) … (0070, if the waypoint drive list was empty prior to the recent selection and the robot is in drive mode, then the robot will begin to drive towards that waypoint), wherein a travel route through which the mobile apparatus autonomously moves is set based on one or more selected captured images (0070, once the waypoint 460 has been selected, the waypoint is added to the set of current waypoint drive targets and the targeting circles 410 and 412 are shaded. If the waypoint is the only current waypoint (or the waypoint at the top of the waypoint list), the robot begins to move toward the selected waypoint 460).

However, Allard does not explicitly disclose: acquire a plurality of captured images, (i) each of the captured images having been automatically captured and registered previously by a mobile apparatus as a candidate of a movement destination of the mobile apparatus, (ii) the automatic capture being triggered based on a predetermined criteria, (iii) the predetermined criteria selected from each of a preset task having been performed, the mobile apparatus having stopped moving, an intersection having been detected near the mobile apparatus, and a direction indicated by a manual operation command transmitted from the display apparatus having been changed. Nevertheless, Denuelle, who is in the same field of endeavor of snapshot-based navigation, discloses: acquire a plurality of captured images (Abstract, the guidance of rotorcraft unmanned aerial systems (UAS). In this approach, a sequence of panoramic snapshots is stored to build a visual route between the home and the goal locations); (i) each of the captured images having been automatically captured and registered previously by a mobile apparatus as a candidate of a movement destination of the mobile apparatus (Abstract, a sequence of panoramic snapshots is stored to build a visual route between the home and the goal locations. Navigation back to the initial location consists in reaching each of the memorised snapshot positions, by performing successive local homing steps. For that purpose, a snapshot-based method is used to estimate the rotorcraft’s 3D position and velocity in real-time); and (ii) the automatic capture being triggered based on a predetermined criteria (3.2 Visual Route Description, A new snapshot is added to the visual route every time that the estimated 3D travelled distance between the current frame and the last memorised snapshot becomes greater than the distance d) … (3 Route Learning and Long-range Homing, Both modules make use of a snapshot-based method which computes optic flow to estimate the rotorcraft’s egomotion relative to a memorised panoramic snapshot. This egomotion information is then used to either select whether the current camera frame should be memorised as a new snapshot waypoint (route learning)). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Allard to incorporate Denuelle’s teachings, for the benefit of having individual snapshots taken at various points in a robot’s or vehicle’s travel, so that a user can choose from the variety of snapshots where the vehicle or robot should travel as a destination.

However, Denuelle still does not explicitly disclose: (iii) the predetermined criteria selected from each of a preset task having been performed, the mobile apparatus having stopped moving, an intersection having been detected near the mobile apparatus, and a direction indicated by a manual operation command transmitted from the display apparatus having been changed. Nevertheless, Lee, who is in the same field of endeavor of implementing a procedural reasoning system for autonomous navigation and robot applications, discloses: (iii) the predetermined criteria selected from each of a preset task having been performed (Appendix: Example system, If the vehicle has reached the cone, then UM-PRS will issue an Off Road behavior. UM-PRS will wait until the vehicle has stopped and reached the end point. When the vehicle has reached the end point, then the demo is done), and the mobile apparatus having stopped moving (Appendix: Example system, UM-PRS will wait until the vehicle has stopped and reached the end point. When the vehicle has reached the end point, then the demo is done). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Allard and Denuelle to incorporate Lee’s teachings, for the benefit of having set points and events in the robot’s logic that tell the robot to capture a snapshot of its surroundings.

However, Lee still does not explicitly disclose: an intersection having been detected near the mobile apparatus, and a direction indicated by a manual operation command transmitted from the display apparatus having been changed. Nevertheless, Peerless, who is in the same field of endeavor of logic and procedural reasoning for robots, discloses: an intersection having been detected near the mobile apparatus (5.3 Mission Editor, All inspection actions (e.g. Photo) can also be defined and executed at junction waypoints), and a direction indicated by a manual operation command transmitted from the display apparatus having been changed (10.3 Sensing, Forward facing 18 MP digital video camera with live video transmission to web-browser interface) … (5.3 Mission Editor, Rotate the robot (and elevate the camera) using the left/right icons on the mission editor until the sensor is pointing at the POI … Tick the action to be performed. 8. Frame any image using drag and drop and then take the picture) … (5.2 Robot Control, Manual Control – the robot is being controlled from this control station by a driver). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Allard, Denuelle, and Lee to incorporate Peerless’s teachings, for the benefit of having set points and events in the robot’s logic that tell the robot to capture a snapshot of its surroundings.
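Editorial aside: the capture logic the examiner assembles from Denuelle (distance-based triggering) and the claimed criteria of limitation (iii) reduces to a single predicate evaluated each control cycle. The sketch below is purely illustrative and not from any cited reference; all names are hypothetical, and the threshold stands in for Denuelle's distance d.

```python
from dataclasses import dataclass

@dataclass
class CaptureState:
    """Hypothetical per-cycle state of the mobile apparatus."""
    task_completed: bool                # a preset task has been performed
    has_stopped: bool                   # the apparatus has stopped moving
    intersection_nearby: bool           # an intersection detected nearby
    direction_changed: bool             # direction in a manual command changed
    metres_since_last_snapshot: float   # cf. Denuelle's travelled distance

SNAPSHOT_DISTANCE_D = 2.0  # stand-in for Denuelle's threshold distance d

def should_capture(s: CaptureState) -> bool:
    """True when any claimed criterion, or the distance trigger, fires."""
    return (s.task_completed
            or s.has_stopped
            or s.intersection_nearby
            or s.direction_changed
            or s.metres_since_last_snapshot > SNAPSHOT_DISTANCE_D)
```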
Regarding claim 29, Allard, Denuelle, Lee, and Peerless disclose the display apparatus of claim 24, as discussed supra. Additionally, Allard discloses: at least one of the captured images is a spherical image of the first site where the mobile apparatus is provided (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 19, Lines 1-5, in certain embodiments, the user interface 300 may include one or more panorama displays 330, as seen in FIG. 3 and FIG. 6. In certain embodiments of a robot 100, a camera such as an omnicam is rotatably mounted on the robot and is able to capture images in 360 degrees without requiring the robot to turn in place. In other embodiments the robot can turn in place in order to capture 360 degree images), and the circuitry is further configured to receive selection of a position on the spherical image being displayed (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 20, Lines 9-13, and/or obstacle avoidance the robot is to use while seeking a target or waypoint. The modes can control either the robot motion, or the interpretation of user input within the user interface. One user interface mode would be to interpret mouse clicks as commands to the pan/tilt camera instead of commands to create new waypoints), and generate autonomous movement request information including information on the position on the spherical image, which causes the mobile apparatus to autonomously move to an area indicated by the position on the spherical image that is selected (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 0070, Lines 1-7, if the waypoint is the only current waypoint (or the waypoint at the top of the waypoint list), the robot begins to move toward the selected waypoint 460. In other words, if the waypoint drive list was empty prior to the recent selection and the robot is in drive mode, then the robot will begin to drive towards that waypoint. If an additional selection is made, a second waypoint may be added to the list. As the robot gets to a waypoint, that waypoint will disappear from the heads-up display. If there are further waypoints in the current waypoint list, then the robot will immediately begin driving towards the second waypoint).

Regarding claim 31, Allard, Denuelle, Lee, and Peerless disclose the display apparatus of claim 24 as discussed supra. Additionally, Allard discloses the circuitry is further configured to further display, for at least one of the one or more selected captured images, additional information indicating a characteristic of an object in the at least one of the one or more selected captured images (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 16, Lines 7-8, this map can include waypoints and additional information such as architectural features such as a wall 344, previous path(s) traveled, direction vectors, etc.) … (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 20, Lines 5-8, in other embodiments, a dedicated portion of the user interface can be used to store selected camera (non-panoramic) images. FIG. 6 shows a sample panoramic view, including a doorway and a lamp 335. These same features are visible in sonar images shown in FIG. 7, which provides an indication of the relationship between the global locations of objects (as in FIG. 7) and their appearance in a panoramic view).

Regarding claim 32, Allard, Denuelle, Lee, and Peerless disclose the display apparatus of claim 24 as discussed supra. Additionally, Allard discloses the circuitry is further configured to display a current location of the mobile apparatus that has started to autonomously move to an area indicated by the at least one of the one or more selected captured images (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 16, Lines 1-8, the preferred embodiment of the user interface 300 includes an overhead map 340. The overhead map 340 contains a representation of the robot 342 and additional graphical information about the robot's immediate surroundings. This display can either be a world-orientation based map (i.e. the robot rotates within it) or, as shown in FIG. 7, a map relative to the robot's orientation (i.e. the robot 342 always appears to be pointed in the same direction). This map can include waypoints and additional information such as architectural features such as a wall 344, previous path(s) traveled, direction vectors, etc.).

Regarding claim 33, Allard, Denuelle, Lee, and Peerless disclose the display apparatus of claim 24 as discussed supra. Additionally, Allard discloses the circuitry is further configured to receive information on the site where the mobile apparatus is provided, and acquire the plurality of captured images specific to the first site where the mobile apparatus is provided (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 20, Lines 1-10, an area of the user interface may be dedicated to displaying panoramic views. Each panorama image is actually a sequence of photographs from the camera displayed in close proximity. In certain embodiments, the user may request the robot to capture a panoramic image. In other embodiments, a dedicated portion of the user interface can be used to store selected camera (non-panoramic) images. FIG. 6 shows a sample panoramic view, including a doorway and a lamp 335. These same features are visible in sonar images shown in FIG. 7, which provides an indication of the relationship between the global locations of objects (as in FIG. 7) and their appearance in a panoramic view) … (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 16, Lines 7-8, this map can include waypoints and additional information such as architectural features such as a wall 344, previous path(s) traveled, direction vectors, etc.).

Regarding claim 37, Allard, Denuelle, Lee, and Peerless disclose the communication system of claim 35 as discussed supra. Additionally, Allard discloses a memory to store the plurality of captured images which were captured by the mobile apparatus, wherein the circuitry of the display apparatus is further configured to obtain the plurality of captured images from the memory (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 20, Lines 1-10, an area of the user interface may be dedicated to displaying panoramic views. Each panorama image is actually a sequence of photographs from the camera displayed in close proximity. In certain embodiments, the user may request the robot to capture a panoramic image. In other embodiments, a dedicated portion of the user interface can be used to store selected camera (non-panoramic) images).
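Editorial aside: the "autonomous movement request information" running through the claim 29 and claim 34 mappings above is, structurally, a small message carrying the user's selection. A minimal sketch, assuming a (pan, tilt) position picked on a spherical image; the type and field names are hypothetical, not from the application or any cited reference.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AutonomousMovementRequest:
    """Hypothetical message from the display apparatus to the mobile apparatus."""
    selected_image_ids: List[str]            # captured images chosen by the user
    spherical_position: Tuple[float, float]  # (pan, tilt) picked on a spherical image

def build_request(image_ids: List[str], pan: float, tilt: float) -> AutonomousMovementRequest:
    """Package the user's selection as claim 29/34 style request information."""
    return AutonomousMovementRequest(list(image_ids), (pan, tilt))

# e.g. the user picks two registered snapshots, then a point on the second one
req = build_request(["img-007", "img-012"], pan=41.5, tilt=-3.0)
```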
Regarding claim 39, Allard discloses a display apparatus, comprising circuitry (Abstract, a system for tele-operating a robot in an environment includes a user interface for controlling the tele-operation of the robot, an imaging device associated with the robot for providing image information representative of the environment around the robot), and (iv) the mobile apparatus being provided in a first site that is different from a second site in which the display apparatus is provided (Abstract, a system for tele-operating a robot in an environment includes a user interface for controlling the tele-operation of the robot) … (an imaging device associated with the robot for providing image information representative of the environment around the robot); display, on a display, the plurality of captured images for selection by a user (0045, the heads-up display 310 continuously shows the most recent camera image received from the robot. In the preferred embodiment, a number of computer-generated images are overlaid on top of the camera image); receive selection of one or more captured images from among the plurality of captured images being displayed (0069, as the user moves the cursor arrow 405 within the heads-up display 310, the user interface constantly redraws the targeting circles 410 and 412 and the perspective box 430 corresponding to the location of the cursor arrow 405. As the user moves the cursor around the heads-up display, the user is able to choose a waypoint); and perform a predetermined operation with respect to the mobile apparatus (0070, once the waypoint 460 has been selected, the waypoint is added to the set of current waypoint drive targets) … (0070, if the waypoint drive list was empty prior to the recent selection and the robot is in drive mode, then the robot will begin to drive towards that waypoint), wherein a travel route through which the mobile apparatus autonomously moves is set based on one or more selected captured images (0070, once the waypoint 460 has been selected, the waypoint is added to the set of current waypoint drive targets and the targeting circles 410 and 412 are shaded. If the waypoint is the only current waypoint (or the waypoint at the top of the waypoint list), the robot begins to move toward the selected waypoint 460).

However, Allard does not explicitly disclose: acquire a plurality of captured images, (i) each of the captured images having been automatically captured and registered previously by a mobile apparatus as a candidate of a movement destination of the mobile apparatus, (ii) the automatic capture being triggered based on a predetermined criteria, (iii) the predetermined criteria selected from each of a preset task having been performed, the mobile apparatus having stopped moving, an intersection having been detected near the mobile apparatus, and a direction indicated by a manual operation command transmitted from the display apparatus having been changed. Nevertheless, Denuelle, who is in the same field of endeavor of snapshot-based navigation, discloses a method, comprising: acquire a plurality of captured images (Abstract, the guidance of rotorcraft unmanned aerial systems (UAS). In this approach, a sequence of panoramic snapshots is stored to build a visual route between the home and the goal locations); (i) each of the captured images having been automatically captured and registered previously by a mobile apparatus as a candidate of a movement destination of the mobile apparatus (Abstract, a sequence of panoramic snapshots is stored to build a visual route between the home and the goal locations. Navigation back to the initial location consists in reaching each of the memorised snapshot positions, by performing successive local homing steps. For that purpose, a snapshot-based method is used to estimate the rotorcraft’s 3D position and velocity in real-time); and (ii) the automatic capture being triggered based on a predetermined criteria (3.2 Visual Route Description, A new snapshot is added to the visual route every time that the estimated 3D travelled distance between the current frame and the last memorised snapshot becomes greater than the distance d) … (3 Route Learning and Long-range Homing, Both modules make use of a snapshot-based method which computes optic flow to estimate the rotorcraft’s egomotion relative to a memorised panoramic snapshot. This egomotion information is then used to either select whether the current camera frame should be memorised as a new snapshot waypoint (route learning)). However, Denuelle still does not explicitly disclose: (iii) the predetermined criteria selected from each of a preset task having been performed, the mobile apparatus having stopped moving, an intersection having been detected near the mobile apparatus, and a direction indicated by a manual operation command transmitted from the display apparatus having been changed. Nevertheless, Lee, who is in the same field of endeavor of implementing a procedural reasoning system for autonomous navigation and robot applications, discloses: (iii) the predetermined criteria selected from each of a preset task having been performed (Appendix: Example system, If the vehicle has reached the cone, then UM-PRS will issue an Off Road behavior. UM-PRS will wait until the vehicle has stopped and reached the end point. When the vehicle has reached the end point, then the demo is done), and the mobile apparatus having stopped moving (Appendix: Example system, UM-PRS will wait until the vehicle has stopped and reached the end point. When the vehicle has reached the end point, then the demo is done). However, Lee still does not explicitly disclose: an intersection having been detected near the mobile apparatus, and a direction indicated by a manual operation command transmitted from the display apparatus having been changed. Nevertheless, Peerless, who is in the same field of endeavor of logic and procedural reasoning for robots, discloses: an intersection having been detected near the mobile apparatus (5.3 Mission Editor, All inspection actions (e.g. Photo) can also be defined and executed at junction waypoints), and a direction indicated by a manual operation command transmitted from the display apparatus having been changed (10.3 Sensing, Forward facing 18 MP digital video camera with live video transmission to web-browser interface) … (5.3 Mission Editor, Rotate the robot (and elevate the camera) using the left/right icons on the mission editor until the sensor is pointing at the POI … Tick the action to be performed. 8. Frame any image using drag and drop and then take the picture) … (5.2 Robot Control, Manual Control – the robot is being controlled from this control station by a driver).

Regarding claim 40, Allard discloses a display apparatus, comprising circuitry (Abstract, a system for tele-operating a robot in an environment includes a user interface for controlling the tele-operation of the robot, an imaging device associated with the robot for providing image information representative of the environment around the robot), and (iv) the mobile apparatus being provided in a first site that is different from a second site in which the display apparatus is provided (Abstract, a system for tele-operating a robot in an environment includes a user interface for controlling the tele-operation of the robot) … (an imaging device associated with the robot for providing image information representative of the environment around the robot); display, on a display, the plurality of captured images for selection by a user (0045, the heads-up display 310 continuously shows the most recent camera image received from the robot. In the preferred embodiment, a number of computer-generated images are overlaid on top of the camera image); receive selection of one or more captured images from among the plurality of captured images being displayed (0069, as the user moves the cursor arrow 405 within the heads-up display 310, the user interface constantly redraws the targeting circles 410 and 412 and the perspective box 430 corresponding to the location of the cursor arrow 405. As the user moves the cursor around the heads-up display, the user is able to choose a waypoint); and perform a predetermined operation with respect to the mobile apparatus (0070, once the waypoint 460 has been selected, the waypoint is added to the set of current waypoint drive targets) … (0070, if the waypoint drive list was empty prior to the recent selection and the robot is in drive mode, then the robot will begin to drive towards that waypoint), wherein a travel route through which the mobile apparatus autonomously moves is set based on one or more selected captured images (0070, once the waypoint 460 has been selected, the waypoint is added to the set of current waypoint drive targets and the targeting circles 410 and 412 are shaded. If the waypoint is the only current waypoint (or the waypoint at the top of the waypoint list), the robot begins to move toward the selected waypoint 460).

However, Allard does not explicitly disclose: acquire a plurality of captured images, (i) each of the captured images having been automatically captured and registered previously by a mobile apparatus as a candidate of a movement destination of the mobile apparatus, (ii) the automatic capture being triggered based on a predetermined criteria, (iii) the predetermined criteria selected from each of a preset task having been performed, the mobile apparatus having stopped moving, an intersection having been detected near the mobile apparatus, and a direction indicated by a manual operation command transmitted from the display apparatus having been changed. Nevertheless, Denuelle, who is in the same field of endeavor of snapshot-based navigation, discloses a non-transitory recording medium storing computer readable code for causing a computer system to execute a method, the method comprising: acquire a plurality of captured images (Abstract, the guidance of rotorcraft unmanned aerial systems (UAS). In this approach, a sequence of panoramic snapshots is stored to build a visual route between the home and the goal locations); (i) each of the captured images having been automatically captured and registered previously by a mobile apparatus as a candidate of a movement destination of the mobile apparatus (Abstract, a sequence of panoramic snapshots is stored to build a visual route between the home and the goal locations. Navigation back to the initial location consists in reaching each of the memorised snapshot positions, by performing successive local homing steps. For that purpose, a snapshot-based method is used to estimate the rotorcraft’s 3D position and velocity in real-time); and (ii) the automatic capture being triggered based on a predetermined criteria (3.2 Visual Route Description, A new snapshot is added to the visual route every time that the estimated 3D travelled distance between the current frame and the last memorised snapshot becomes greater than the distance d) … (3 Route Learning and Long-range Homing, Both modules make use of a snapshot-based method which computes optic flow to estimate the rotorcraft’s egomotion relative to a memorised panoramic snapshot. This egomotion information is then used to either select whether the current camera frame should be memorised as a new snapshot waypoint (route learning)). However, Denuelle still does not explicitly disclose: (iii) the predetermined criteria selected from each of a preset task having been performed, the mobile apparatus having stopped moving, an intersection having been detected near the mobile apparatus, and a direction indicated by a manual operation command transmitted from the display apparatus having been changed. Nevertheless, Lee, who is in the same field of endeavor of implementing a procedural reasoning system for autonomous navigation and robot applications, discloses: (iii) the predetermined criteria selected from each of a preset task having been performed (Appendix: Example system, If the vehicle has reached the cone, then UM-PRS will issue an Off Road behavior. UM-PRS will wait until the vehicle has stopped and reached the end point. When the vehicle has reached the end point, then the demo is done), and the mobile apparatus having stopped moving (Appendix: Example system, UM-PRS will wait until the vehicle has stopped and reached the end point. When the vehicle has reached the end point, then the demo is done). However, Lee still does not explicitly disclose: an intersection having been detected near the mobile apparatus, and a direction indicated by a manual operation command transmitted from the display apparatus having been changed. Nevertheless, Peerless, who is in the same field of endeavor of logic and procedural reasoning for robots, discloses: an intersection having been detected near the mobile apparatus (5.3 Mission Editor, All inspection actions (e.g. Photo) can also be defined and executed at junction waypoints) and a direction indicated by a manual operation command transmitted from the display apparatus having been changed (10.3 Sensing, Forward facing 18 MP digital video camera with live video transmission to web-browser interface) … (5.3 Mission Editor, Rotate the robot (and elevate the camera) using the left/right icons on the mission editor until the sensor is pointing at the POI … Tick the action to be performed. 8. Frame any image using drag and drop and then take the picture) … (5.2 Robot Control, Manual Control – the robot is being controlled from this control station by a driver).
Claims 25-27 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Allard et al. (US 20030216834 A1) in view of Denuelle et al. (Snapshot-based Navigation for the Guidance of UAS), further in view of Lee et al. (An implementation of the procedural reasoning system for multirobot applications), further in view of Peerless et al. (ExR-1 Robot Operating Guide), further in view of Courbon et al. (Vision-based navigation of unmanned aerial vehicle).

Regarding claim 25, Allard, Denuelle, Lee, and Peerless disclose the display apparatus of claim 24 as discussed supra. Additionally, Courbon, who is in the same field of endeavor of vision-based navigation, discloses the one or more selected captured images include a captured image representing a final destination to which the mobile apparatus autonomously moves (Abstract, the UAV is controlled to reach each image of the visual route using a vision-based control law adapted to its dynamic model and without explicitly planning any trajectory) … (5.4 Paragraph 1, when the first target is reached, the key image 2 is set as the new target. When the key image 2 is reached, the key image 1 is set as the new target and so on (7 times)). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Allard, Denuelle, Lee, and Peerless to incorporate Courbon’s teachings, for the benefit of having a dynamic navigational model that does not need explicit planning for any trajectory. Courbon’s teachings use only picture-based navigation from pictures stored within the system. Justification for combining the combination of Allard, Denuelle, Lee, and Peerless with Courbon’s disclosures comes not only from the state of the art but also from Allard (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 12, Lines 4-7, one of skill in the art will recognize that a user interface can be designed to meet the particular needs of the user, altering both the content of the user interface and the placement of any element within the display).

Regarding claim 26, Allard, Denuelle, Lee, Peerless, and Courbon disclose the display apparatus of claim 25 as discussed supra. Additionally, Allard discloses the one or more selected captured images further include captured images each representing a waypoint to the final destination through which the mobile apparatus autonomously moves (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Lines 6-11, means for designating one or more waypoints located anywhere in the user-perceptible image towards which the robot will move, the waypoint in the user-perceptible image towards which the robot will first move being designated as the active waypoint using an icon, means for automatically converting the location of the active waypoint in the user-perceptible image into a target location having x, y, and z coordinates in the environment of the robot).

Regarding claim 27, Allard, Denuelle, Lee, and Peerless disclose the display apparatus of claim 24 as discussed supra. Additionally, Allard discloses the circuitry is further configured to receive the selection of the one or more captured images in an order (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 0070, Lines 1-7, if the waypoint is the only current waypoint (or the waypoint at the top of the waypoint list), the robot begins to move toward the selected waypoint 460. In other words, if the waypoint drive list was empty prior to the recent selection and the robot is in drive mode, then the robot will begin to drive towards that waypoint. If an additional selection is made, a second waypoint may be added to the list. As the robot gets to a waypoint, that waypoint will disappear from the heads-up display. If there are further waypoints in the current waypoint list, then the robot will immediately begin driving towards the second waypoint). However, Allard does not explicitly disclose: generate autonomous movement request information including information on the order in which the one or more captured images are selected, which causes the mobile apparatus to autonomously move to the one or more movement destinations in the order in which the corresponding one or more captured images are selected.

Regarding claim 34, Allard, Denuelle, Lee, and Peerless disclose the display apparatus of claim 24 as discussed supra. Additionally, Allard discloses the circuitry is further configured to: generate autonomous movement request information including information on the one or more selected captured images (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 0070, Lines 1-7, if the waypoint is the only current waypoint (or the waypoint at the top of the waypoint list), the robot begins to move toward the selected waypoint 460. In other words, if the waypoint drive list was empty prior to the recent selection and the robot is in drive mode, then the robot will begin to drive towards that waypoint. If an additional selection is made, a second waypoint may be added to the list. As the robot gets to a waypoint, that waypoint will disappear from the heads-up display. If there are further waypoints in the current waypoint list, then the robot will immediately begin driving towards the second waypoint). However, Allard does not explicitly disclose: causes the mobile apparatus to autonomously move to the one or more movement destinations each corresponding to an area indicated by a corresponding one of the one or more selected captured images; and transmit the autonomous movement request information to the mobile apparatus. Nevertheless, Courbon discloses: causes the mobile apparatus to autonomously move to the one or more movement destinations each corresponding to an area indicated by a corresponding one of the one or more selected captured images (Abstract, the robot navigation task is then defined as a concatenation of visual path subsets (called visual route) linking the current observed image and a target image belonging to the visual memory. The UAV is controlled to reach each image of the visual route using a vision-based control law adapted to its dynamic model and without explicitly planning any trajectory); and transmit the autonomous movement request information to the mobile apparatus (1.1 Paragraph 4, in the last stage (refer to Section 4), given an image of one of the visual paths as a target, the UAV navigation mission is defined as a concatenation of visual path subsets, called visual route. A navigation task then consists in autonomously executing a visual route, on-line and in real-time).
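Editorial aside: the ordered-selection behaviour that claims 27 and 34 turn on, and that the examiner reads onto Allard's paragraph 0070 waypoint list, is essentially a FIFO queue of destinations. A minimal sketch under that reading; `drive_to` is a hypothetical motion primitive, not an API from any cited reference.

```python
from collections import deque
from typing import Callable, Iterable, Tuple

Waypoint = Tuple[float, float]  # hypothetical (x, y) target in the robot's frame

def drive_waypoints(selected: Iterable[Waypoint],
                    drive_to: Callable[[Waypoint], None]) -> None:
    """Visit destinations in the order their captured images were selected.

    Mirrors the behaviour quoted from Allard paragraph 0070: the robot drives
    toward the waypoint at the top of the list, drops it on arrival, and
    immediately starts toward the next. drive_to is assumed to block until
    the robot arrives.
    """
    queue = deque(selected)        # selection order is preserved
    while queue:
        drive_to(queue.popleft())  # top of the waypoint list first
```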
Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Allard et al. (US 20030216834 A1) in view of Denuelle et al. (Snapshot-based Navigation for the Guidance of UAS), further in view of Lee et al. (An implementation of the procedural reasoning system for multirobot applications), further in view of Peerless et al. (ExR-1 Robot Operating Guide), further in view of Courbon et al. (Vision-based navigation of unmanned aerial vehicle), further in view of Masuko (US 10451431 B2).

Regarding claim 28, Allard, Denuelle, Lee, Peerless, and Courbon disclose the display apparatus of claim 24 as discussed supra. Additionally, Masuko, who is in the same field of endeavor of route determination using imagery, discloses the circuitry is further configured to display an estimated time for the mobile apparatus to move to at least one of the one or more movement destinations (Masuko, Paragraph 61, Lines 1-9, various known techniques may be applicable to the method for calculating the estimated arrival time. For example, the display controlling unit 42 searches a travel route for the user leaving the visiting location closest to the time specified by the user to the position indicated by the location information of the image data 37. The display controlling unit 42 calculates an estimated arrival time based on the estimated traveling time obtained by dividing the searched travel route by a given travelling speed. For example, the display controlling unit 42 calculates, as the estimated arrival time, the time after the estimated traveling time elapses since the visiting time closest to the time specified by the user). One of ordinary skill in the art prior to the effective filing date of the given invention would have been motivated to combine the combination of Allard, Denuelle, Lee, Peerless, and Courbon with Masuko to provide an ETA for when a mobile device will reach its intended target. This would enable a user to plan what happens when the mobile apparatus arrives at that location, which can be critical for time-sensitive operations such as shipping facilities. Justification for combining the combination of Allard, Denuelle, Lee, Peerless, and Courbon with Masuko comes not only from the state of the art but also from Allard (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 12, Lines 4-7, one of skill in the art will recognize that a user interface can be designed to meet the particular needs of the user, altering both the content of the user interface and the placement of any element within the display).
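Editorial aside: the ETA computation quoted from Masuko's paragraph 61 is simple arithmetic: divide the searched route length by a given travelling speed, then add the resulting travel time to the departure time. A minimal sketch of that calculation; all names are illustrative.

```python
from datetime import datetime, timedelta

def estimated_arrival(route_length_m: float,
                      speed_m_per_s: float,
                      departure: datetime) -> datetime:
    """Divide the searched route by a given travelling speed (Masuko, para. 61),
    then add the resulting travel time to the departure time."""
    travel_time = timedelta(seconds=route_length_m / speed_m_per_s)
    return departure + travel_time

# e.g. a 150 m route at 1.2 m/s arrives about 125 seconds after departure
eta = estimated_arrival(150.0, 1.2, datetime(2025, 11, 29, 9, 0))
```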
Claims 30 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Allard et al. (US 20030216834 A1) in view of Denuelle et al. (Snapshot-based Navigation for the Guidance of UAS), further in view of Lee et al. (An implementation of the procedural reasoning system for multirobot applications), further in view of Peerless et al. (ExR-1 Robot Operating Guide), further in view of Courbon et al. (Vision-based navigation of unmanned aerial vehicle), further in view of Fleischman et al. (US 10467804 B2).

Regarding claim 30, Allard, Denuelle, Lee, Peerless, and Courbon disclose the display apparatus of claim 24 as discussed supra. Furthermore, Fleischman, who is in the same field of endeavor of image-synced location visualization, discloses the circuitry is further configured to display a map image indicating a location where each of the one or more selected captured images was captured (Fleischman, Detailed Description, Paragraph 3, Lines 9-11, in one embodiment, the visualization interface displays the immersive model of the environment as both a 2D map and a first-person view. Each image is represented on the 2D map as an icon at the location at which the image was captured. The user can select an icon to display the image that was captured at the corresponding location). One of ordinary skill in the art prior to the effective filing date of the given invention would have been motivated to combine the combination of Allard, Denuelle, Lee, Peerless, and Courbon with Fleischman. This would serve to pinpoint the precise locations in an environment at which a mobile apparatus recorded visual data, allowing for an improved ability to plan where a user might want the mobile system to move. Justification for combining the combination of Allard, Denuelle, Lee, Peerless, and Courbon with Fleischman comes not only from the state of the art but also from Allard (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 16, Lines 2-8, the overhead map 340 contains a representation of the robot 342 and additional graphical information about the robot's immediate surroundings. This display can either be a world-orientation based map (i.e. the robot rotates within it) or, as shown in FIG. 7, a map relative to the robot's orientation (i.e. the robot 342 always appears to be pointed in the same direction). This map can include waypoints and additional information such as architectural features such as a wall 344, previous path(s) traveled, direction vectors, etc.).

Regarding claim 36, Allard, Denuelle, Lee, Peerless, and Courbon disclose the communication system of claim 35 as discussed supra. Furthermore, Allard discloses the mobile apparatus includes another circuitry configured to: generate route information indicating the travel route of the mobile apparatus based on location information of each of the one or more selected captured images (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 41, Lines 9-12, each active waypoint is automatically converted into a target location having x, y, and z coordinates in the robot's environment that cause the robot to move from its current location in the environment to the target location in the environment), and control the mobile apparatus to autonomously move based on the travel route indicated by the route information (Allard, Abstract, Lines 5-8, means for designating one or more waypoints located anywhere in the user-perceptible image towards which the robot will move, the waypoint in the user-perceptible image towards which the robot will first move being designated as the active waypoint using an icon). However, Allard does not explicitly disclose: each location information indicating a location where the corresponding one of the one or more selected captured images was captured. Nevertheless, Fleischman discloses: each location information indicating a location where the corresponding one of the one or more selected captured images was captured (Fleischman, Detailed Description, Paragraph 3, Lines 9-11, in one embodiment, the visualization interface displays the immersive model of the environment as both a 2D map and a first-person view. Each image is represented on the 2D map as an icon at the location at which the image was captured. The user can select an icon to display the image that was captured at the corresponding location).

Claims 41 and 43 are rejected under 35 U.S.C. 103 as being unpatentable over Allard et al. (US 20030216834 A1) in view of Denuelle et al. (Snapshot-based Navigation for the Guidance of UAS), further in view of Lee et al. (An implementation of the procedural reasoning system for multirobot applications), further in view of Peerless et al. (ExR-1 Robot Operating Guide), further in view of Courbon et al. (Vision-based navigation of unmanned aerial vehicle), further in view of Han et al. (CN 111724438 A).

Regarding claim 41, Allard, Denuelle, Lee, Peerless, and Courbon disclose the communication system of claim 35 as discussed supra. Furthermore, Han, who is in the same field of endeavor of position determination based on pictures, discloses a memory (Disclosure of Invention, the processor, coupled with the memory and the communication component, is configured to execute a computer program for performing the steps or operations of the data processing method), wherein the mobile apparatus includes another circuitry configured to: capture a respective one of the plurality of captured images (Interpretation of terms, Paragraph 17, images of the environment around the running track of the robot are collected at a speed of 10fps) … (after the mapping is completed, an image sequence (a plurality of key frame images) of off-line mapping and positioning information (such as position and/or posture information) corresponding to each key frame image can be obtained), and, based upon at least one of predetermined criteria or a request by the user: register a respective one of the plurality of captured images as the candidate of the movement destination of the mobile apparatus (Disclosure of Invention, comparing the global features of the first image with the global features of all the key frame images in the location database to determine the key frame image with the maximum similarity, wherein if the maximum similarity is greater than a preset similarity threshold, the key frame image with the maximum similarity is determined to be the matched key frame image), and store a respective one of the plurality of captured images captured (Interpretation of terms, based on the established location database, after the matched key frame image is determined, other key frame images with a distance smaller than a preset distance threshold value with the matched key frame image need to be determined) and registered previously as the candidate of the movement destination of the mobile apparatus to the memory (Disclosure of Invention, establishing a mapping relation among global features, local features, positioning information, key points and three-dimensional point coordinates of the key points of each key frame image; and storing the mapping relation into a location database). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Allard, Denuelle, Lee, Peerless, and Courbon to incorporate Han’s teachings, for the benefit of streamlining navigational efforts by storing pictures in a database that can be retrieved for comparison with a target destination. Justification for combining the combination of Allard, Denuelle, Lee, Peerless, and Courbon with Han comes not only from the state of the art but also from Allard (Allard, DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT, Paragraph 12, Lines 4-7, one of skill in the art will recognize that a user interface can be designed to meet the particular needs of the user, altering both the content of the user interface and the placement of any element within the display).

Regarding claim 43, Allard, Denuelle, Lee, Peerless, and Courbon disclose the display apparatus of claim 24 as discussed supra. Additionally, Han discloses each of the captured images, which was captured and registered in advance as the candidate of the movement destination of the mobile apparatus, was also stored in association with a candidate identifier and location information (Disclosure of Invention, establishing a mapping relation among global features, local features, positioning information, key points and three-dimensional point coordinates of the key points of each key frame image; and storing the mapping relation into a location database).

Claim 44 is rejected under 35 U.S.C. 103 as being unpatentable over Allard et al. (US 20030216834 A1) in view of Denuelle et al. (Snapshot-based Navigation for the Guidance of UAS), further in view of Lee et al. (An implementation of the procedural reasoning system for multirobot applications), further in view of Peerless et al. (ExR-1 Robot Operating Guide), further in view of Gavrilut (Mobile Robot Navigation based on CNN Images Processing – An Experimental Setup).

Regarding claim 44, Allard, Denuelle, Lee, and Peerless disclose the display apparatus of claim 24, as discussed supra. Additionally, Gavrilut, who is in the same field of endeavor of mobile robot navigation based on images, discloses the circuitry is further configured to receive a selection of at least one of the plurality of captured images as an exclusion route to be excluded from the travel route through which the mobile apparatus autonomously moves (3 CNN based Image Processing, the robot and target positions are each identified by a single pixel. In our example, the occupied pixels having values +1 (black) represent the forbidden positions where the robot can’t move and the pixels having values –1 (white) represent the free positions accessible for the mobile robot). It would have been prima facie obvious to one of ordinary skill in the art …

Prosecution Timeline

Aug 10, 2023: Application Filed
Feb 05, 2025: Non-Final Rejection — §103
May 09, 2025: Response Filed
Jul 14, 2025: Final Rejection — §103
Oct 28, 2025: Request for Continued Examination
Nov 06, 2025: Response after Non-Final Action
Nov 29, 2025: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592101: INFORMATION COMMUNICATION DEVICE OF VEHICLE, INFORMATION MANAGEMENT SERVER, AND INFORMATION COMMUNICATION SYSTEM
Granted Mar 31, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 17%
With Interview: 39% (+22.2%)
Median Time to Grant: 2y 4m
PTA Risk: High
Based on 12 resolved cases by this examiner. Grant probability is derived from the examiner's career allow rate.
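Editorial aside: the headline figures above appear to combine additively, in percentage points. A minimal sketch of that arithmetic, assuming the dashboard rounds to whole percentages (2/12 ≈ 16.7% is shown as 17%, and 17% + 22.2 points ≈ 39%):

```python
# How the headline figures relate, assuming the dashboard combines
# percentages additively (in percentage points).
granted, resolved = 2, 12
career_allow_rate = granted / resolved               # 0.1667 -> shown as 17%

interview_lift = 0.222                               # +22.2 points
with_interview = career_allow_rate + interview_lift  # 0.389 -> shown as 39%

tc_average = career_allow_rate + 0.353               # 17% is -35.3% vs TC avg

print(f"allow {career_allow_rate:.1%}, with interview {with_interview:.1%}, "
      f"TC avg ~ {tc_average:.1%}")
```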
