Prosecution Insights
Last updated: April 19, 2026
Application No. 18/684,844

INFORMATION PROCESSING METHOD, INFORMATION PROCESSING PROGRAM, AND INFORMATION PROCESSING DEVICE

Status: Final Rejection (§103)
Filed: Feb 20, 2024
Examiner: RORIE, ALYSSA N
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Sony Group Corporation
OA Round: 2 (Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 78% (59 granted of 76 resolved; +25.6% vs TC avg; above average)
Interview Lift: +19.6% among resolved cases with an interview (a strong, roughly +20% lift)
Typical Timeline: 2y 10m average prosecution; 18 applications currently pending
Career History: 94 total applications across all art units
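The headline figures can be checked against the stated counts. A minimal sketch, assuming the displayed percentages are simple rounded ratios and the with-interview figure is the allow rate plus the additive interview lift (an assumption about this dashboard's method, not a documented formula):

```python
# Reproduce the headline examiner stats from the stated counts.
granted, resolved = 59, 76
allow_rate = granted / resolved                # 0.776 -> displayed as 78%
interview_lift = 0.196                         # +19.6% lift for interviewed cases
with_interview = allow_rate + interview_lift   # 0.972 -> displayed as 97%

print(f"Career allow rate:      {allow_rate:.1%}")      # 77.6%
print(f"Estimated w/ interview: {with_interview:.1%}")  # 97.2%
```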

Statute-Specific Performance

§101: 22.6% (-17.4% vs TC avg)
§103: 48.8% (+8.8% vs TC avg)
§102: 0.6% (-39.4% vs TC avg)
§112: 26.9% (-13.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 76 resolved cases.
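One internal consistency check: if each "vs TC avg" delta is simply the statute rate minus the Tech Center average (an assumption about how the dashboard computes deltas), all four rows imply the same baseline:

```python
# Recover the implied Tech Center baseline from each statute's rate and delta,
# assuming delta = rate - baseline (hypothetical; not a documented formula).
stats = {"§101": (22.6, -17.4), "§103": (48.8, 8.8),
         "§102": (0.6, -39.4), "§112": (26.9, -13.1)}
for statute, (rate, delta) in stats.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")  # 40.0% in every row
```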

Office Action (Final Rejection under §103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims: Claims 1-19 are pending. Claims 1, 18, and 19 have been amended.

Response to Amendment

Objections to Drawings: Applicant’s replacement sheet overcomes the drawing objections. Objections to the drawings are withdrawn.

Objections to Specification: Applicant’s amendment to the specification overcomes the specification objections. Objections to the specification are withdrawn.

Claim Interpretation Under 35 U.S.C. §112(f): Applicant’s amendments to the claims overcome the claim interpretation under 112(f). Claim interpretation under 112(f) has been withdrawn.

Rejections Under 35 U.S.C. §101: Applicant’s amendments to the claims overcome the rejection of record. The 101 rejection is withdrawn.

Rejections Under 35 U.S.C. §103: Claims 1, 18, and 19 have been amended to change the scope of the claimed invention. Specifically, limitations pertaining to “the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof” were added, which changes the scope of the claimed invention.

Response to Arguments: Applicant’s arguments with respect to claims 1-19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. (US2019/0220002A1) in view of Matsuki (JP2018110352A) in further view of Guo et al. (US2018/0210442A1), hereinafter Huang, Matsuki, and Guo respectively.

Regarding claim 1, (Currently Amended) Huang teaches an information processing method executed by one processor or executed by a plurality of processors in cooperation, the information processing method comprising: a first acquisition step for acquiring map information (see at least [0121] “The environmental sensing unit 616 may be configured to obtain environmental information 644 using one or more sensors, as previously described with reference to FIGS. 4 and 5. The environmental information may comprise an environmental map. The environmental map may comprise a topological map or a metric map.”); a second acquisition step for acquiring current position information of a flight vehicle (see at least [0096] “one or more of the sensors in the environmental sensing unit may be configured to provide data regarding a state of the movable object.
The state information provided by a sensor can include information regarding a spatial disposition of the movable object (e.g., position information such as longitude, latitude, and/or altitude; orientation information such as roll, pitch, and/or yaw).”); a third acquisition step for acquiring information concerning a virtual viewpoint for a user to check the flight vehicle in an image (see at least [0072] “the image data may be provided in a 3D virtual environment that is displayed on the user terminal (e.g., virtual reality system or augmented reality system). The 3D virtual environment may optionally correspond to a 3D map. The virtual environment may comprise a plurality of points or objects that can be manipulated by a user. The user can manipulate the points or objects through a variety of different actions in the virtual environment. Examples of those actions may include selecting one or more points or objects, drag-and-drop, translate, rotate, spin, push, pull, zoom-in, zoom-out, etc. Any type of movement action of the points or objects in a three-dimensional virtual space may be contemplated. A user may use the user terminal to manipulate the points or objects in the virtual environment to control a flight path of the UAV and/or motion characteristic(s) of the UAV. A user may also use the user terminal to manipulate the points or objects in the virtual environment to control motion characteristic(s) and/or different functions of the imaging device.”); and a generation step for generating a virtual viewpoint image, which is an image viewed from the virtual viewpoint, based on the map information, the current position information of the flight vehicle, and the information concerning the virtual viewpoint (see at least [0159] “The FPV 932 may comprise augmented stereoscopic video data. The augmented stereoscopic video data may be generated by fusing stereoscopic video data and environmental/motion information as described elsewhere herein.” and [0130] “The motion information may also include one or more of the following: location in global or local coordinates, attitude, altitude, spatial disposition, velocity, acceleration, directional heading, distance traveled, state of battery power, and/or health of one or more components on the movable object.” and [0229] “the 3D environment may comprise a plurality of virtual objects. The virtual objects may be graphical solid objects or graphical wireframes. The virtual objects may comprise points or objects that may be of interest to a user. Points or objects that may be of less interest to the user may be omitted from the 3D virtual environment to reduce object clutter and to more clearly delineate points/objects of interest. The reduced clutter makes it easier for the user to select or identify a desired point or object of interest from the 3D virtual environment.” also see at least [0071]). Examiner interprets that information concerning a virtual viewpoint for a user to check the flight vehicle in an image is encompassed at least by user can manipulate the points or objects through a variety of different actions in the virtual environment and virtual viewpoint image is encompassed at least by augmented stereoscopic video data and/or (3D) virtual environment. Huang does not explicitly teach the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof. 
Matsuki suggests the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof (see at least [0065] “Further, when the flight device 2 is out of the path of flying the flying device 2, the viewpoint setting portion 152 sets the viewpoint setting portion 152 so that the flight image 2 includes the route indicator 311 indicating the flight route and the flying device icon 305 indicating the flying device 2, Set a virtual viewpoint. Thus, the bird's-eye view display area 304 displayed by the display control unit 153 on the display unit 12 includes the route indicator 311 indicating the flight route and the flying device icon 305 indicating the flying device 2. By viewing the bird's-eye view display region 304, the user can grasp the direction to move the flying device 2 in order to return the flying device 2 to the flight path.”). Examiner interprets that a 3D model of the flight vehicle is suggested at least by flying device icon 305 indicating the flying device 2. Guo more explicitly teaches the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof (see at least [0101] “It should be noted that the vehicle 402 or the mobile device 104 may insert a model (e.g., 3D model) or representation of the vehicle 402 (e.g., car, drone, etc.) in a 3D surround view 416. ” also see at least [0109] and [0112]). Examiner interprets that virtual viewpoint image is encompassed at least by 3D surround view 416. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Huang with the suggested teaching of the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof found in Matsuki and the more explicit teaching of the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof found in Guo. One could combine the teachings in order to have an information processing method executed by one processor or executed by a plurality of processors in cooperation, the information processing method comprising: a first acquisition step for acquiring map information; a second acquisition step for acquiring current position information of a flight vehicle; a third acquisition step for acquiring information concerning a virtual viewpoint for a user to check the flight vehicle in an image; and a generation step for generating a virtual viewpoint image, which is an image viewed from the virtual viewpoint, based on the map information, the current position information of the flight vehicle, and the information concerning the virtual viewpoint, the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof with a reasonable expectation of success. One would have been motivated to do so in order to improve user experience during operation of movable objects such as unmanned aerial vehicles (UAV) (see at least Huang, [0038] and [0179]) and further to improve operability of a flying device that deviates from a user’s visual field (see at least Matsuki, [0005]). Regarding claim 2, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 1 as detailed above. 
Huang teaches wherein the virtual viewpoint can be changed by operation of the user (see at least [0136] “the AR layer may comprise input regions in the augmented FPV that allow the user to interact with one or more graphical elements in the FPV.” also see at least [0227] and [0229]), and in the third acquisition step, information concerning the virtual viewpoint specified based on input of the user is acquired (see at least [0072] “In some embodiments, the image data may be provided in a 3D virtual environment that is displayed on the user terminal (e.g., virtual reality system or augmented reality system). The 3D virtual environment may optionally correspond to a 3D map. The virtual environment may comprise a plurality of points or objects that can be manipulated by a user. The user can manipulate the points or objects through a variety of different actions in the virtual environment. Examples of those actions may include selecting one or more points or objects, drag-and-drop, translate, rotate, spin, push, pull, zoom-in, zoom-out, etc. Any type of movement action of the points or objects in a three-dimensional virtual space may be contemplated. A user may use the user terminal to manipulate the points or objects in the virtual environment to control a flight path of the UAV and/or motion characteristic(s) of the UAV. A user may also use the user terminal to manipulate the points or objects in the virtual environment to control motion characteristic(s) and/or different functions of the imaging device.”). Regarding claim 3, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 2 as detailed above. Huang teaches wherein a line-of-sight direction from the virtual viewpoint can be changed by operation of the user (see at least [0198] “Any other user interface tools or techniques may be provided that can allow a user to specify a target object or a target direction in the FPV. The user may select the target object or the target direction by selecting a portion of an image in the FPV with aid of a user interactive external device (e.g., handheld controller, mouse, joystick, keyboard, trackball, touchpad, button, verbal commands, gesture-recognition, attitude sensor, thermal sensor, touch-capacitive sensors, or any other device), as described elsewhere herein.”), and in the generation step, the virtual viewpoint image viewed from the virtual viewpoint in the line-of-sight direction is generated based on the map information, the current position information of the flight vehicle, the information concerning the virtual viewpoint, and the information concerning the line-of-sight direction specified based on the input of the user (see at least [0200] “The augmented stereoscopic video data can be generated by fusing the stereoscopic video data with environmental/motion information.” and [0229] “the 3D environment may comprise a plurality of virtual objects. The virtual objects may be graphical solid objects or graphical wireframes. The virtual objects may comprise points or objects that may be of interest to a user. Points or objects that may be of less interest to the user may be omitted from the 3D virtual environment to reduce object clutter and to more clearly delineate points/objects of interest. 
The reduced clutter makes it easier for the user to select or identify a desired point or object of interest from the 3D virtual environment.” and [0198] “Any other user interface tools or techniques may be provided that can allow a user to specify a target object or a target direction in the FPV.”). Examiner interprets that line-of-sight direction is encompassed at least by target direction. Regarding claim 4, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 2 as detailed above. Huang teaches comprising: a fourth acquisition step for acquiring operation input of the user relating to flight control of the flight vehicle (see at least [0042] “The display device may be configured to receive stereoscopic video data transmitted from the movable object, and display a FPV 132 of the environment based on the stereoscopic video data. The user terminal can be used to control one or more motion characteristics of the movable object and/or a payload supported by the movable object. For example, a user can use the user terminal to visually navigate and control operation (e.g., movement) of the movable object and/or one or more imaging devices onboard the movable object, based on the FPV of the environment.”); and a conversion step for converting the operation input of the user into control information for flight control of the flight vehicle, wherein in the conversion step, a method of converting the operation input into the control information is changed according to a position of the virtual viewpoint (see at least [0202] “In some embodiments, a sensor 1337 on the HMD can capture a user's head movement, such as rotation about an axis (e.g., pitch, roll, or yaw rotation), as well as forward and backward movement. The head movement information can be converted into a control signal and sent to the movable object in order to control the movement of the movable object 1302 and/or imaging device 1306.” also see at least [0042] and [0207]). Examiner interprets that operation input of the user is encompassed at least by head movement information. Regarding claim 5, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 1 as detailed above. Huang teaches wherein the information concerning the virtual viewpoint is relative position information based on a position of the flight vehicle (see at least [0012] “The environmental information may comprise (1) relative positions between a movable object and one or more objects in the environment, and/or (2) relative positions between two or more objects in the environment. The environmental information may comprise a distance of a movable object from an object in the environment, and/or an orientation of the movable object relative to the object.”). Regarding claim 6, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 5 as detailed above. Huang suggests wherein the virtual viewpoint image is an image obliquely looking down the flight vehicle and a periphery of the flight vehicle from the virtual viewpoint (see at least [0229] “Although various embodiments of the disclosure have been described with reference to a 3-D FPV, it should be appreciated that other types of views may be presented in alternative or in conjunction with the 3-D FPV. For instance, in some embodiments, the map view 1684 in FIG. 16 can be a 3D map instead of a 2D map. 
The 3D map may be alterable to view the 3D environment from various angles.”). However, Guo more explicitly teaches wherein the virtual viewpoint image is an image obliquely looking down the flight vehicle and a periphery of the flight vehicle from the virtual viewpoint (see at least [0104] “The 3D surround view 416 may be presented from a viewpoint (e.g., perspective, camera angle, etc.). For example, the 3D surround view 416 may be presented from a top-down viewpoint, a back-to-front viewpoint (e.g., raised back-to-front, lowered back-to-front, etc.), a front-to-back viewpoint (e.g., raised front-to-back, lowered front-to-back, etc.), an oblique viewpoint (e.g., hovering behind and slightly above, other angled viewpoints, etc.), etc. Additionally or alternatively, the 3D surround view 416 may be rotated and/or shifted.”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the suggested teaching of Huang of wherein the virtual viewpoint image is an image obliquely looking down the flight vehicle and a periphery of the flight vehicle from the virtual viewpoint with the more explicit teaching of the same found in Guo with a reasonable expectation of success. One would have been motivated to do so in order to improve user experience during operation of movable objects such as unmanned aerial vehicles (UAV) (see at least Huang [0038] and [0179]).

Regarding claim 7, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 5 as detailed above. Huang suggests wherein the virtual viewpoint is located above the flight vehicle, and the virtual viewpoint image is an image looking down the flight vehicle and a periphery of the flight vehicle right below from the virtual viewpoint (see at least [0229] “Although various embodiments of the disclosure have been described with reference to a 3-D FPV, it should be appreciated that other types of views may be presented in alternative or in conjunction with the 3-D FPV. For instance, in some embodiments, the map view 1684 in FIG. 16 can be a 3D map instead of a 2D map. The 3D map may be alterable to view the 3D environment from various angles.”). However, Guo more explicitly teaches wherein the virtual viewpoint is located above the flight vehicle, and the virtual viewpoint image is an image looking down the flight vehicle and a periphery of the flight vehicle right below from the virtual viewpoint (see at least [0104] “The 3D surround view 416 may be presented from a viewpoint (e.g., perspective, camera angle, etc.). For example, the 3D surround view 416 may be presented from a top-down viewpoint, a back-to-front viewpoint (e.g., raised back-to-front, lowered back-to-front, etc.), a front-to-back viewpoint (e.g., raised front-to-back, lowered front-to-back, etc.), an oblique viewpoint (e.g., hovering behind and slightly above, other angled viewpoints, etc.), etc. Additionally or alternatively, the 3D surround view 416 may be rotated and/or shifted.”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the suggested teaching of Huang of wherein the virtual viewpoint is located above the flight vehicle, and the virtual viewpoint image is an image looking down the flight vehicle and a periphery of the flight vehicle right below from the virtual viewpoint with the more explicit teaching of the same found in Guo with a reasonable expectation of success. One would have been motivated to do so in order to improve user experience during operation of movable objects such as unmanned aerial vehicles (UAV) (see at least Huang [0038] and [0179]).

Regarding claim 8, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 1 as detailed above. Huang teaches comprising a display control step for displaying the virtual viewpoint image on a screen (see at least [0007] “The method may comprise: displaying, on a terminal remote to a movable object, a first person view (FPV) of the environment based on augmented stereoscopic video data, wherein the augmented stereoscopic video data is generated by incorporating: (a) stereoscopic video data generated by the movable object while operating in the environment, and (b) environmental information.” also see at least [0068] “The user terminal may include a display (or display device). The display may be a screen. The display may be a light-emitting diode (LED) screen, OLED screen, liquid crystal display (LCD) screen, plasma screen, or any other type of screen. The display may or may not be a touchscreen. The display may be configured to show a graphical user interface (GUI). The GUI may show an image or a FPV that permit a user to control actions of the UAV.”).

Regarding claim 9, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 8 as detailed above. Huang teaches wherein a camera is mounted on the flight vehicle (see at least [0049] “The movable object may be configured to support an onboard payload 106.” and [0053] “In FIG. 1, the payload 106 may be an imaging device. In some embodiments, the imaging device may be a multi-ocular video camera.” also see at least [0052]), and in the display control step, a captured image captured by the camera mounted on the flight vehicle is displayed on the screen (see at least [0070] “The image on the display may show a view obtained with aid of a payload of the movable object. For instance, an image captured by the imaging device may be shown on the display.”).

Regarding claim 10, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 9 as detailed above.
Huang teaches wherein, in the display control step, an image displayed on the screen is switched from the virtual viewpoint image to the captured image based on operation of the user (see at least [0071] “The views may be toggled between one or more FPV and one or more map view, or the one or more FPV” and [0245] “the terminal 1812 can provide control data to one or more of the movable object 1800, carrier 1802, and payload 1804 and receive information from one or more of the movable object 1800, carrier 1802, and payload 1804 (e.g., position and/or motion information of the movable object, carrier or payload; data sensed by the payload such as image data captured by a payload camera)...The control data from the terminal may result in control of the payload, such as control of the operation of a camera or other image capturing device (e.g., taking still or moving pictures, zooming in or out, turning on or off, switching imaging modes, change image resolution, changing focus, changing depth of field, changing exposure time, changing viewing angle or field of view).”). Regarding claim 11, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 9 as detailed above. Huang teaches wherein, in the display control step, the captured image is displayed on the screen in addition to the virtual viewpoint image (see at least [0071] “ The map may be a two-dimensional map or a three-dimensional map. The views may be toggled between a two-dimensional and a three-dimensional map view, or the two-dimensional and three dimensional map views may be shown simultaneously. A user may use the user terminal to select a portion of the map to specify a target and/or direction of motion by the movable object. The views may be toggled between one or more FPV and one or more map view, or the one or more FPV and one or more map view may be shown simultaneously.”). Regarding claim 12, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 8 as detailed above. Huang teaches wherein in the first acquisition step, 3D map information is acquired as the map information (see at least [0121] “The environmental sensing unit 616 may be configured to obtain environmental information 644 using one or more sensors, as previously described with reference to FIGS. 4 and 5. The environmental information may comprise an environmental map. The environmental map may comprise a topological map or a metric map. The metric map may comprise at least one of the following: a point cloud, a 3D grid map” also see at least [0126] “The environmental map may be a 3D map of the environment surrounding the movable object.”), and in the generation step, the virtual viewpoint image of 3D viewed from the virtual viewpoint is generated based on the 3D map information, the current position information of the flight vehicle, and the information concerning the virtual viewpoint (see at least [0159] “The FPV 932 may comprise augmented stereoscopic video data. 
The augmented stereoscopic video data may be generated by fusing stereoscopic video data and environmental/motion information as described elsewhere herein.” and [0130] “The motion information may also include one or more of the following: location in global or local coordinates, attitude, altitude, spatial disposition, velocity, acceleration, directional heading, distance traveled, state of battery power, and/or health of one or more components on the movable object.” and [0229] “the 3D environment may comprise a plurality of virtual objects. The virtual objects may be graphical solid objects or graphical wireframes. The virtual objects may comprise points or objects that may be of interest to a user. Points or objects that may be of less interest to the user may be omitted from the 3D virtual environment to reduce object clutter and to more clearly delineate points/objects of interest. The reduced clutter makes it easier for the user to select or identify a desired point or object of interest from the 3D virtual environment.” also see at least [0071]). Examiner interprets that 3D map information is encompassed at least by environmental information, environmental map, and/or 3D grid map. Regarding claim 13, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 12 as detailed above. Huang teaches comprising an estimation step for estimating a flyable area of the flight vehicle (see at least [0231] “In the example of FIG. 17, the FPV may be captured by a camera on a UAV. The UAV may be configured to detect or identify flight-restricted regions from an environmental map as the UAV moves within the environment.”), wherein in the generation step, display concerning the estimated flyable area is added to the virtual viewpoint image (see at least [0230] “FIG. 17 shows an example of a user interface (UI) in an augmented FPV displaying flight restricted regions, in accordance with some embodiments. FIG. 17 is similar to FIG. 9 except the augmented FPV in FIG. 17 is further configured to display one or more flight-restricted regions.”), and in the display control step, the virtual viewpoint image to which the display concerning the flyable area is added is displayed on the screen (see at least [0230] “An augmented FPV 1732 of an environment may be displayed on a display device 1730. The FPV may include images 1750-1', 1750-2', and 1750-3' of a first object, second object, and third object, respectively that are located in the environment. A plurality of flight-restriction regions 1760 and 1762 may be displayed in the augmented FPV. The flight-restriction region 1760 may be displayed surrounding the image 1750-1' of the first object. The flight-restriction region 1762 may be displayed surrounding the image 1750-3' of the third object. The flight-restriction regions may be displayed having any visual marking scheme. For example, the flight-restriction regions may be displayed having any shape (e.g., regular or irregular shape), size, dimension, color, in 2-D or 3-D, etc.”). Regarding claim 14, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 13 as detailed above. Huang teaches wherein the 3D map information includes information concerning an object obstructing flight of the flight vehicle (see at least [0126] “The obstacle avoidance unit may also determine whether the movable object will collide with the one or more obstacles based on an environmental map. 
The environmental map may be a 3D map of the environment surrounding the movable object.” also see at least [0086] “The environmental information may also contain information about obstacles or potential obstacles in a motion path of the movable object.”). Huang suggests in the estimation step, a movable plane of the flight vehicle is estimated as the flyable area of the flight vehicle based on the 3D map information, in the generation step, display of the movable plane of the flight vehicle is added to the virtual viewpoint image, and in the display control step, the virtual viewpoint image to which the display of the movable plane of the flight vehicle is added is displayed on the screen (see at least [0156] “Different boundaries may be defined around the movable object. The boundaries may be used to determine a relative proximity of one or more objects in the environment to the movable object, as described later in detail with reference to FIG. 9. A boundary can be defined by a regular shape or an irregular shape. As shown in FIG. 8, the boundaries may be defined by circles (in 2-D space) or spheres (in 3-D space) having different radii. (See, e.g., FIG. 5). For example, an edge of a first boundary may be at a distance d1 from the center of the movable object. An edge of a second boundary may have a distance d2 from the center of the movable object. An edge of a third boundary may have a distance d3 from the center of the movable object. The distances d1, d2, and d3 may correspond to the respective radius of the first, second, and third boundaries. In the example of FIG. 8, d1 may be greater than d2, and d2 may be greater than d3. The circles/spheres defined by the boundaries may or may not be concentric to one another. In some alternative embodiments, a center of a shape defined by each boundary need not lie at a center of the movable object. The boundaries can be defined in any manner around the movable object, along one or more planes and/or in 3-dimensional space.”). Guo suggests in the estimation step, a movable plane of the flight vehicle is estimated as the flyable area of the flight vehicle based on the 3D map information (see at least [0147] “For a given point in the 3D surround view 1416, the corresponding coordinates in the 2D bird's-eye view 1428 may be determined by applying Equations 1-4. Given point-A' 1456a and point-B' 1456b, the motion vector 1448 (M', α) will be obtained on the ground plane. The vehicle 102 will be moved accordingly. In this Figure, M' 1448 is the 2D translation and a 1464 is the 2D rotation.” also see at least Fig. 12), in the generation step, display of the movable plane of the flight vehicle is added to the virtual viewpoint image, and in the display control step, the virtual viewpoint image to which the display of the movable plane of the flight vehicle is added is displayed on the screen (see at least [0095] “In an implementation, the mobile device 104 may display a motion vector 448 in the 3D surround view 416. The motion vector 448 may be generated based on user input 125 indicating vehicle motion. For example, the user may drag the virtual vehicle 402 on the touchscreen 114 in a certain direction. The mobile device 104 may display the motion vector 448 as visual feedback to the user to assist in maneuvering the vehicle 402.” also see at least [0119] “The mobile device 104 may convert 808 the user input 125 to a 2D instruction 123 for moving the vehicle 102. 
The mobile device 104 may map the user input 125 in the 3D surround view 116 to a motion vector in a 2D bird's-eye view of the vehicle 102. The 2D instruction 123 may include the motion vector mapped to a ground plane of the vehicle 102.”). Examiner interprets that movable plane is encompassed at least by ground plane, and the movable plane of the flight vehicle is added to the virtual viewpoint image is encompassed at least by the mobile device may display a motion vector in the 3D surround view as the motion vector is mapped to the ground plane. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the suggested teaching of Huang of in the estimation step, a movable plane of the flight vehicle is estimated as the flyable area of the flight vehicle based on the 3D map information, in the generation step, display of the movable plane of the flight vehicle is added to the virtual viewpoint image, and in the display control step, the virtual viewpoint image to which the display of the movable plane of the flight vehicle is added is displayed on the screen with the suggested teaching of the same found in Guo. One could combine the teachings in order to teach an information processing method wherein the 3D map information includes information concerning an object obstructing flight of the flight vehicle, in the estimation step, a movable plane of the flight vehicle is estimated as the flyable area of the flight vehicle based on the 3D map information, in the generation step, display of the movable plane of the flight vehicle is added to the virtual viewpoint image, and in the display control step, the virtual viewpoint image to which the display of the movable plane of the flight vehicle is added is displayed on the screen with a reasonable expectation of success. One would have been motivated to do so in order to improve user experience during operation of movable objects such as unmanned aerial vehicles (UAV) (see at least Huang [0038] and [0179]).

Regarding claim 15, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 12 as detailed above. Huang teaches comprising a fifth acquisition step for acquiring input of the user concerning a flight trajectory of the flight vehicle (see at least [0150] “a user may manually make the motion path adjustments using a remote controller for controlling the movable object, and/or through the FPV user interface on the display device.”), wherein in the generation step, display of a flight plan trajectory of the flight vehicle specified based on the input of the user is added to the virtual viewpoint image (see at least [0217] “a visual marker may be provided within the image indicative of the motion path to the target object. The visual marker may be a point, region, icon, line, or vector. For instance, the line or vector may be indicative of a direction of the motion path towards the target. In another example, the line or vector may be indicative of the direction that the movable object is heading.”), and in the display control step, the virtual viewpoint image to which the display of the flight plan trajectory is added is displayed (see at least [0217] “When a user selects a portion of the 3-D FPV to specify a target, a motion path to the selected target may or may not be visually indicated on the display.” also see at least [0126] “the obstacle avoidance unit may be configured to overlay a trajectory indicative of the motion path onto the environmental map,”). Examiner interprets that flight trajectory is encompassed at least by motion path, and flight plan trajectory is encompassed at least by motion path, trajectory indicative of the motion path, and/or flight path.

Regarding claim 16, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 15 as detailed above. Huang teaches comprising a flight control step for controlling flight of the flight vehicle based on the flight plan trajectory of the flight vehicle (see at least [0072] “A user may use the user terminal to manipulate the points or objects in the virtual environment to control a flight path of the UAV and/or motion characteristic(s) of the UAV.” also see at least [0075] and [0221]). Examiner interprets that flight plan trajectory is encompassed at least by flight path, trajectory indicative of the motion path, and/or motion characteristics.

Regarding claim 17, (Original) the combination of Huang, Matsuki, and Guo teaches the information processing method according to claim 1 as detailed above. Huang teaches wherein the flight vehicle is a drone (see at least [0045] “the movable object may be an unmanned aerial vehicle (UAV).”).

Regarding claim 18, (Currently Amended) Huang teaches a non-transitory computer-readable medium storing computer-readable instructions that, when executed by a computer, cause the computer to perform a method comprising (see at least [0006] “In another aspect of the disclosure, a non-transitory computer-readable medium storing instructions that, when executed, causes a computer to perform a method for generating a first person view (FPV) of an environment is provided.”): acquiring map information (see at least [0121] “The environmental sensing unit 616 may be configured to obtain environmental information 644 using one or more sensors, as previously described with reference to FIGS. 4 and 5. The environmental information may comprise an environmental map. The environmental map may comprise a topological map or a metric map.”); acquiring current position information of a flight vehicle (see at least [0096] “one or more of the sensors in the environmental sensing unit may be configured to provide data regarding a state of the movable object. The state information provided by a sensor can include information regarding a spatial disposition of the movable object (e.g., position information such as longitude, latitude, and/or altitude; orientation information such as roll, pitch, and/or yaw).”); acquiring information concerning a virtual viewpoint for a user to check the flight vehicle in an image (see at least [0072] “the image data may be provided in a 3D virtual environment that is displayed on the user terminal (e.g., virtual reality system or augmented reality system). The 3D virtual environment may optionally correspond to a 3D map. The virtual environment may comprise a plurality of points or objects that can be manipulated by a user. The user can manipulate the points or objects through a variety of different actions in the virtual environment. Examples of those actions may include selecting one or more points or objects, drag-and-drop, translate, rotate, spin, push, pull, zoom-in, zoom-out, etc. Any type of movement action of the points or objects in a three-dimensional virtual space may be contemplated. A user may use the user terminal to manipulate the points or objects in the virtual environment to control a flight path of the UAV and/or motion characteristic(s) of the UAV. A user may also use the user terminal to manipulate the points or objects in the virtual environment to control motion characteristic(s) and/or different functions of the imaging device.”); and generating a virtual viewpoint image, which is an image viewed from the virtual viewpoint, based on the map information, the current position information of the flight vehicle, and the information concerning the virtual viewpoint (see at least [0159] “The FPV 932 may comprise augmented stereoscopic video data. The augmented stereoscopic video data may be generated by fusing stereoscopic video data and environmental/motion information as described elsewhere herein.” and [0130] “The motion information may also include one or more of the following: location in global or local coordinates, attitude, altitude, spatial disposition, velocity, acceleration, directional heading, distance traveled, state of battery power, and/or health of one or more components on the movable object.” and [0229] “the 3D environment may comprise a plurality of virtual objects. The virtual objects may be graphical solid objects or graphical wireframes. The virtual objects may comprise points or objects that may be of interest to a user. Points or objects that may be of less interest to the user may be omitted from the 3D virtual environment to reduce object clutter and to more clearly delineate points/objects of interest. The reduced clutter makes it easier for the user to select or identify a desired point or object of interest from the 3D virtual environment.” also see at least [0071]). Examiner interprets that information concerning a virtual viewpoint for a user to check the flight vehicle in an image is encompassed at least by user can manipulate the points or objects through a variety of different actions in the virtual environment and virtual viewpoint image is encompassed at least by augmented stereoscopic video data and/or (3D) virtual environment. Huang does not explicitly teach the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof.
Matsuki suggests the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof (see at least [0065] “Further, when the flight device 2 is out of the path of flying the flying device 2, the viewpoint setting portion 152 sets the viewpoint setting portion 152 so that the flight image 2 includes the route indicator 311 indicating the flight route and the flying device icon 305 indicating the flying device 2, Set a virtual viewpoint. Thus, the bird's-eye view display area 304 displayed by the display control unit 153 on the display unit 12 includes the route indicator 311 indicating the flight route and the flying device icon 305 indicating the flying device 2. By viewing the bird's-eye view display region 304, the user can grasp the direction to move the flying device 2 in order to return the flying device 2 to the flight path.”). Examiner interprets that a 3D model of the flight vehicle is suggested at least by flying device icon 305 indicating the flying device 2. Guo more explicitly teaches the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof (see at least [0101] “It should be noted that the vehicle 402 or the mobile device 104 may insert a model (e.g., 3D model) or representation of the vehicle 402 (e.g., car, drone, etc.) in a 3D surround view 416. ” also see at least [0109] and [0112]). Examiner interprets that virtual viewpoint image is encompassed at least by 3D surround view 416. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Huang with the suggested teaching of the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof found in Matsuki and the more explicit teaching of the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof found in Guo. One could combine the teaching in order to have a non-transitory computer-readable medium storing computer- readable instructions that, when executed by a computer, cause the computer to perform a method comprising: acquiring map information; acquiring current position information of a flight vehicle; acquiring information concerning a virtual viewpoint for a user to check the flight vehicle in an image; and generating a virtual viewpoint image, which is an image viewed from the virtual viewpoint, based on the map information, the current position information of the flight vehicle, and the information concerning the virtual viewpoint, the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof with a reasonable expectation of success. One would have been motivated to do so in order to improve user experience during operation of movable objects such as unmanned aerial vehicles (UAV) (see at least Huang, [0038] and [0179]) and further to improve operability of a flying device that deviates from a user’s visual field (see at least Matsuki, [0005]). Regarding claim 19, (Currently Amended) Huang teaches an information processing device comprising: circuitry that (see at least [0005] “An apparatus for generating a first person view (FPV) of an environment is provided in accordance with an aspect of the disclosure. 
The apparatus may comprise one or more processors”) see at least [0121] “The environmental sensing unit 616 may be configured to obtain environmental information 644 using one or more sensors, as previously described with reference to FIGS. 4 and 5. The environmental information may comprise an environmental map. The environmental map may comprise a topological map or a metric map.”); see at least [0096] “one or more of the sensors in the environmental sensing unit may be configured to provide data regarding a state of the movable object. The state information provided by a sensor can include information regarding a spatial disposition of the movable object (e.g., position information such as longitude, latitude, and/or altitude; orientation information such as roll, pitch, and/or yaw).”); see at least [0072] “the image data may be provided in a 3D virtual environment that is displayed on the user terminal (e.g., virtual reality system or augmented reality system). The 3D virtual environment may optionally correspond to a 3D map. The virtual environment may comprise a plurality of points or objects that can be manipulated by a user. The user can manipulate the points or objects through a variety of different actions in the virtual environment. Examples of those actions may include selecting one or more points or objects, drag-and-drop, translate, rotate, spin, push, pull, zoom-in, zoom-out, etc. Any type of movement action of the points or objects in a three-dimensional virtual space may be contemplated. A user may use the user terminal to manipulate the points or objects in the virtual environment to control a flight path of the UAV and/or motion characteristic(s) of the UAV. A user may also use the user terminal to manipulate the points or objects in the virtual environment to control motion characteristic(s) and/or different functions of the imaging device.”); and see at least [0159] “The FPV 932 may comprise augmented stereoscopic video data. The augmented stereoscopic video data may be generated by fusing stereoscopic video data and environmental/motion information as described elsewhere herein.” and [0130] “The motion information may also include one or more of the following: location in global or local coordinates, attitude, altitude, spatial disposition, velocity, acceleration, directional heading, distance traveled, state of battery power, and/or health of one or more components on the movable object.” and [0229] “the 3D environment may comprise a plurality of virtual objects. The virtual objects may be graphical solid objects or graphical wireframes. The virtual objects may comprise points or objects that may be of interest to a user. Points or objects that may be of less interest to the user may be omitted from the 3D virtual environment to reduce object clutter and to more clearly delineate points/objects of interest. The reduced clutter makes it easier for the user to select or identify a desired point or object of interest from the 3D virtual environment.” also see at least [0071]). Examiner interprets that information concerning a virtual viewpoint for a user to check the flight vehicle in an image is encompassed at least by user can manipulate the points or objects through a variety of different actions in the virtual environment and virtual viewpoint image is encompassed at least by augmented stereoscopic video data and/or (3D) virtual environment. 
Huang does not explicitly teach the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof.

Matsuki suggests the virtual viewpoint image including a 3D model of the flight vehicle displayed at a periphery thereof (see at least [0065] “Further, when the flight device 2 is out of the path of flying the flying device 2, the viewpoint setting portion 152 sets the viewpoint setting portion 152 so that the flight image 2 includes the route indicator 311 indicating the flight route and the flying device icon 305 indicating the flying device 2, Set a virtual viewpoint. Thus, the bird's-eye view display area 304 displayed by the display control unit 153 on the display unit 12 includes the route indicator 311 indicating the flight route and the flying device icon 305 indicating the flying …
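The disputed limitation recurs across claims 1, 18, and 19. As a reading aid only, here is a minimal, hypothetical sketch of the four steps amended claim 1 recites; every name and value is invented, since neither the application nor the office action contains code:

```python
# Hypothetical illustration of amended claim 1's recited steps (names invented).

def generate_virtual_viewpoint_image(map_info, vehicle_position, viewpoint):
    """Generation step: an image viewed from the virtual viewpoint, built from
    the map, the vehicle's current position, and the viewpoint information,
    including a 3D model of the flight vehicle at the image's periphery."""
    return {
        "rendered_from": viewpoint,
        "map": map_info,
        "vehicle_model_at": vehicle_position,  # the limitation added by amendment
    }

map_info = {"type": "3D map"}                  # first acquisition step
vehicle_position = (35.6812, 139.7671, 120.0)  # second acquisition step (lat, lon, alt m)
viewpoint = {"offset_m": (-10.0, 0.0, 5.0)}    # third acquisition step (relative to vehicle)

image = generate_virtual_viewpoint_image(map_info, vehicle_position, viewpoint)
```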

Prosecution Timeline

Feb 20, 2024: Application Filed
Jul 11, 2025: Non-Final Rejection — §103
Aug 12, 2025: Response Filed
Oct 09, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by the same examiner in similar technology

Patent 12600253: SYSTEMS AND METHODS FOR THE LOCALIZATION AND NAVIGATION OF A VEHICLE TO A CHARGE STATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12585273: CONTROL SYSTEM FOR HAULING VEHICLES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12570320: VEHICLE FOR PERFORMING MINIMAL RISK MANEUVER DURING AUTONOMOUS DRIVING AND METHOD OF OPERATING THE VEHICLE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12545227: BRAKE SERVICE MANAGEMENT SYSTEM (granted Feb 10, 2026; 2y 5m to grant)
Patent 12523015: Work Machine (granted Jan 13, 2026; 2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 97% (+19.6%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 76 resolved cases by this examiner. Grant probability is derived from the career allow rate.
