Prosecution Insights
Last updated: April 19, 2026
Application No. 18/804,164

IMAGE GENERATION APPARATUS, IMAGE GENERATION METHOD, AND COMPUTER-READABLE STORAGE MEDIUM

Non-Final OA: §103, §112
Filed: Aug 14, 2024
Examiner: IVEY, DANA DESHAWN
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Jvckenwood Corporation
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
OA Rounds: 1-2
Time to Grant: 2y 2m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 90%, above average (683 granted / 762 resolved; +37.6% vs TC avg)
Interview Lift: +7.3% (moderate), based on resolved cases with interview
Avg Prosecution: 2y 2m (fast prosecutor)
Career History: 806 total applications across all art units; 44 currently pending

Statute-Specific Performance

§101: 2.3% (-37.7% vs TC avg)
§103: 27.9% (-12.1% vs TC avg)
§102: 42.1% (+2.1% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)
Tech Center average shown as a reference estimate. Based on career data from 762 resolved cases.
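These deltas read as simple differences against a single Tech Center average estimate. A minimal sketch of that arithmetic (an assumption about how the panel is computed, not something the source confirms):

    # Assumed derivation: delta = examiner's per-statute rate minus the
    # Tech Center average estimate, in percentage points.
    examiner_rate = {"101": 2.3, "103": 27.9, "102": 42.1, "112": 21.9}     # %
    delta_vs_tc = {"101": -37.7, "103": -12.1, "102": 2.1, "112": -18.1}    # points

    for statute, rate in examiner_rate.items():
        implied_tc_avg = rate - delta_vs_tc[statute]
        print(f"§{statute}: examiner {rate:.1f}% vs TC avg ~{implied_tc_avg:.1f}% "
              f"({delta_vs_tc[statute]:+.1f} pts)")
    # Each implied Tech Center average works out to 40.0%, consistent with a
    # single average estimate applied across all four statutes.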

Office Action

§103, §112
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . *Examiner Note: Claim language is bolded. Cited References are italicized. Examiner interpretations are preceded with an asterisk *. Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “An image generation apparatus” in at least claim 1 “a data acquisition unit” in at least claim 1 “a first image generation unit” in at least claim 1 “a second image generation unit” in at least claim 1 “a user information acquisition unit” in at least claim 1 “a mobile body control unit” in at least claim 1 Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. Specifically, the image generation apparatus, data acquisition unit, a first image generation unit, a second image generation unit are interpreted as a CPU; and the user information acquisition unit and a mobile body control unit are interpreted as a camera and controller for moving/propelling the drones, respectively, with reference to Applicant’s published PGPUB 20240400201A1: “An image generation apparatus” in claim 1 is recited as “the image generation apparatus 31 may be constituted with an arithmetic circuit such as a central processing unit (CPU)” in para. [0037] of Applicant’s specification. “a data acquisition unit” in claim 1 is recited as “the data acquisition unit 51 that acquires image data of the target T from the second cameras (imaging devices) 26, 27, 28, and 29 mounted on the drones (mobile bodies)” in para. [0100] of Applicant’s specification. “a first image generation unit” in claim 1 is recited as “The first image generation unit 52 and the second image generation unit 53 may also be constituted by an arithmetic circuit such as a CPU.” in para. [0051] of Applicant’s specification. Based on paragraph, [0051], this element is being interpreted as a CPU on the drone. 
“a second image generation unit” in claim 1 is recited as “The first image generation unit 52 and the second image generation unit 53 may also be constituted by an arithmetic circuit such as a CPU” in para. [0051] of Applicant’s specification. “a user information acquisition unit” in claim 1 is recited as “the first camera (a user information acquisition unit) 25” in para. [0029] of Applicant’s specification. “a mobile body control unit” in claim 1 is recited as “the drone control units (mobile body control units) 30A and 30B that respectively control moving positions of the drones “ in para. [0102] of Applicant’s specification. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 1-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. In claim 1, it is unclear how or in what sense the “user information acquisition unit” (camera 25) and the “mobile body control unit” are located/included within the image generation apparatus. Fig. 3 illustrates the user information acquisition unit 25 and the mobile body control unit 30 located within the drone while Fig. 4 illustrates the user information acquisition unit 25 located outside of the image generation apparatus. This is unclear and confusing because since para. [0102] of Applicant’s specification describes “The image generation apparatus according to at least one embodiment is provided with … the first camera (user information acquisition unit) 25 … and the drone control units (mobile body control units) 30A and 30B that respectively control moving positions of the drones 11A and 11B”. When considered in light of the specification, this recitation of “an image generation apparatus comprising: … a user information acquisition unit” and the “mobile body control unit” appears to be inaccurate because while the drone comprises a user information acquisition unit/camera and mobile body control unit, the image generation apparatus is not illustrated to comprises a user information acquisition unit/camera. It is not clear how the image generation apparatus comprises these additional elements when it appears as though it is the drone that comprises everything and not the image apparatus which is just CPU. Fig. 3 and Fig. 
4 illustrate that the camera and mobile body control unit is not actually included in the image generation apparatus, but is actually a part of the drone and it is not clear how the configuration operates together. Because the relationship is ambiguous, the claim fails to particularly point out and distinctly claim the invention. For the purpose of prosecution, Examiner has interpreted the image generation apparatus as including the entirety of the structures and to be on the drone as discussed below. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1-2 and 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Henry (US 2021/0263515A1) in view of Kamalakantha (US 2018/0314251A1) and further in view of Cao (US 2020/0115047 A1). Regarding claim 1, Henry discloses An image generation apparatus comprising: a data acquisition unit (Fig. 4, 404 with 410 with 420 and see at least para. [0063] of Henry which discloses “The computer-readable media 404 may include volatile and nonvolatile memory and/or removable and non-removable media implemented in any type of technology for storage of information, such as computer-readable instructions, data structures, program modules, or other code and data” and see at least para. [0065] of Henry which discloses “the computer-readable media 404 may store, at least temporarily captured images 410, sensor data 412, one or more 3D models 414, one or more scan plans 416, and navigation/tracking information 418”, *Examiner interprets this to be a unit that acquires data and can store it, i.e. a data acquisition unit. In this connection, the Examiner also interprets receiver 420 combined with the stored images 410 to be examples of data storage that is acquired by the data acquisition unit) configured to acquire image data (see at least para. [0058] of Henry which discloses “the lower- resolution image sensors 212, 214, 216, 302, 304, and 306 may enable visual inertial odometry (VIO) for high resolution localization and obstacle detection and avoidance. 
Further, some or all of the image sensors 208, 212, 214, 216, 302, 304, and 306 may be used to scan a scan target to obtain range data“, *Examiner interprets this as acquiring image data of a target) of a target (Fig. 1, 110 and see at least para. [0045] of Henry which discloses “the scan target 110 for performing the scan of the scan target 110. In this example, the scan target 110 includes a building 112“) from a plurality of imaging devices (Fig. 2, 208/212/214/216 and see at least para. [0056] of Henry which discloses “a plurality of lower- resolution image sensors 212, 214, and 216 that are spaced out around the topside 218 of the body 220 of the UAV 102” and see at least para. [0055] of Henry which discloses “the image sensors include a high-resolution image sensor 208 that is mounted on a gimbal 210 to support steady, low-blur image capture and object tracking”) mounted on a mobile body (Fig. 1, 102 and see at least para. [0041] of Henry which discloses “an unmanned aerial vehicle (UAV) 102”, *Examiner interprets UAV 102 to be a mobile body since para. [0022] of Applicant’s specification describes “a drone (mobile body) 11, which is an unmanned aerial vehicle”); a first image generation unit (see at least para. [0040] of Henry which discloses “some example implementations are described for configuring a UAV to autonomously scan a scan target, such as for capturing images and/or generating a 3D model”) configured to generate a three-dimensional image of the target based on the image data (see at least para. [0045] of Henry which discloses “The processor(s) onboard the UAV 102 may be configured by executable instructions to generate, receive, or otherwise access a lower-resolution 3D model (not shown in FIG. 1) of the scan target 110 for performing the scan of the scan target 110 … The lower-resolution 3D model generated or otherwise accessed by the UAV 102 may include a set of points in 3D space corresponding to surfaces of the scan target 110 detected during an initial scan of the scan target 110”, *Examiner interprets this to be examples of generation of 3D image of the target); a second image generation unit (Fig. 6, 602 and see at least para. [0110] of Henry which discloses “the user interface 602 may be configured to suggest a polygon or other 2D shape 1002 …the user interface generates an image of the polygon or other 2D shape 1002”) configured to generate a two-dimensional image (Fig. 10, 1002 and see at least para. [0031] of Henry which discloses “obtain an image of the scan target, and the user may create a polygon or other 2D shape on the image of the scan target” and see at least para. [0110] of Henry which discloses “the user interface generates an image of the polygon or other 2D shape 1002”) of the target (Fig. 10, 110 and see at least para. [0109] of Henry which discloses “obtain an image of the scan target 110, which may be presented in the user interface 602” and see at least para. [0110] of Henry which discloses “the user interface generates an image of the polygon or other 2D shape 1002”) as viewed from a designated specific viewpoint (see at least para. [0048] of Henry which discloses “The UAV 102 may be configured to systematically cover the points of the lower-resolution 3D model of the scan target 110 with the fields of view (FOVs)”, *Examiner interprets this as evidence of a specific viewpoint) using the three-dimensional image; a user information acquisition unit (see at least para. 
[0076] of Henry which discloses “the image sensors 208, 212, 214, 216, 302, 304, 306 may be cameras and may include one or more optical sensors for capturing images (including still images and/or video). In some implementations, the UAV 102 may include some cameras 422 dedicated for image capture of a scan target and other cameras dedicated for image capture for visual navigation” and see at least para. [0042] of Henry which discloses “a display associated with the controller 104”, *Examiner interprets some of these sensors to be a user information acquisition unit since para. [0029] of Applicant’s specification describes “the first camera (a user information acquisition unit) 25, the second cameras (imaging devices) 26, 27, 28, and 29” and para. [0030] of Applicant’s specification describes “The user information acquisition unit is not limited to the first camera 25. User information acquired by the display apparatus 12”); a user (Fig. 1, 108 and see at least para. [0041] of Henry which discloses “a user 108”) who operates the mobile bodies (see at least para. [0042] of Henry which discloses “the user 108 may cause the controller 104 to issue commands to the UAV 102, e.g., “takeoff”, “land”, “follow”, such as via one or more virtual controls presented in a graphical user interface (GUI) and/or via one or more physical controls”, *Examiner interprets these commands will help the user operate the mobile bodies 102); and a mobile body control unit (Fig. 4, 402 and see at least para. [0061] of Henry which discloses “the UAV 102 may be controlled autonomously by one or more onboard processors 402 that execute one or more executable programs” and see at least para. [0030] of Henry which discloses “one or more processors onboard the UAV may be configured by program code or other executable instructions to perform the autonomous operations described herein. For instance, the one or more processors may control and navigate the UAV along an intended flight path” and see at least para. [0046] of Henry which discloses “the processor(s) may control the propulsion mechanism to cause the UAV 102 to fly to assume the pose corresponding to the pose for traversing the sequence of poses of the scan plan”) configured to control a moving position of the mobile body based on position information of the mobile body (see at least para. [0120] of Henry which discloses “the controller 104 to enable the UAV 102 to assume several poses to capture distance (range) information for one or more surfaces of the scan target 110 to generate the lower- resolution 3D model”), and defined by a plurality of lines (see at least para. [0216] of Henry which discloses “the normals may be virtual lines extending outward from the points of the lower-resolution 3D model generally perpendicular to a surface of the scan target on which the respective points are located”, *Examiner interprets these virtual lines to be similar to a plurality of connection lines that start at points on the 3D model (target) with each line representing a vector extending outward which is the type of geometric line claimed between a points); a vertex (see at least para. [0200] of Henry which describes “mesh vertices” and “that a mesh 3D model of the scan target has been generated”) of the target to the imaging devices (see at least para. 
[0200] of Henry which discloses “that a mesh 3D model of the scan target has been generated, the higher-resolution 3D model generating program 1706 may be executed to select a polygon from the mesh, determine one or more captured images that match the location of the selected polygon for determining a texture to apply to the selected polygon, and may apply the determined texture to the face of the selected polygon. These steps may be repeated for each polygon in the mesh 3D model. The higher-resolution 3D model generating program 1706 may also perform a refinement phase for the mesh vertices to line up image content and/or textures between the plurality of polygons and views to make the mesh 3D model more consistent”, *Examiner interprets the “select a polygon from the mesh, determine one or more captured images” to be evidence of the vertex of the target to the imaging devices). Henry may not explicitly disclose the acquisition unit will acquire position information and line-of-sight information of a user and a mobile body control unit configured to control a moving position of the mobile body based on position information of the mobile body, and the position information and the line-of-sight information of the user, wherein the mobile body control unit controls a moving position of the mobile body such that the user is positioned in a region and a plurality of connection lines connecting a vertex. However, in the same field of endeavor, Kamalakantha discloses acquire position information and line-of-sight information (see at least para. [0020] of Kamalakantha which discloses “line-of-sight determination in one step. For example, the device can send out a beacon that can be received by the drone 110 if there is line-of-sight (e.g., an infrared beacon). If the drone 110 receives the signal, then there is line-of-sight”) of a user (Fig. 1, 152 and see at least para. [0018] of Kamalakantha which discloses “the drone 110 has a line-of-sight with the respective devices 150 and/or the associated human 152. For example, the drone 110 may use the GPS or beacon technology to determine the general location of the device and/or human and then use a sensor 128 (e.g., an image sensor, an infrared sensor, etc.) to confirm that there is a line-of sight between the drone 110 and the respective device and/or human” and see at least para. [0021] of Kamalakantha which discloses “The first purpose is to track the respective humans (e.g., to ensure a location of the humans 152 are within particular parameters). The second purpose is to ensure that drones 110 are within line-of-sight of at least one human”) and a mobile body control unit (see at least para. [0014] of Kamalakantha which discloses “a navigation engine 120 can be used to control movement of the drone 110” and see at least para. [0045] of Kamalakantha which discloses “the drone 400 can receive other control instructions from a control unit”) configured to control a moving position of the mobile body based on position information of the mobile body (see at least para. [0014] of Kamalakantha which discloses “a drone capable of tracking humans based on devices and control the drone to be within line-of-sight of at least one of the devices”), and the position information and the line-of-sight information of the user, wherein the mobile body control unit (see at least para. 
[0022] of Kamalakantha which discloses “if the line-of sight engine 122 determines that there is no line-of-sight, the drone 110 can be controlled via the navigation engine 120 to become within line-of-sight of at least one human” and see at least para. [0015] of Kamalakantha which discloses “Line-of-sight can be defined by criteria used by the drone. The system 200 can include drone 110 as well as drones 210 a-210 m, the devices 150 associated with respective humans 152 a-152 n, a drone control device 160 to control one or more of the drones 110, 210, and a platform 170 that can provide services based on information provided by the drones 110, 210 and/or devices 150 a-150 n”) controls a moving position of the mobile body such that the user is positioned in a region (see at least para. [0022] of Kamalakantha which discloses “if the line-of-sight engine 122 determines that there is no line-of-sight, the drone 110 can be controlled via the navigation engine 120 to become within line-of-sight of at least one human”, *Examiner interprets being within the line-of sight to be equivalent to the user being positioned in a region). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the acquisition unit of Henry to acquire position information and line-of-sight information of a user and a mobile body control unit configured to control a moving position of the mobile body based on position information of the mobile body, and the position information and the line-of-sight information of the user, wherein the mobile body control unit controls a moving position of the mobile body such that the user is positioned in a region, as taught in Kamalakantha with a reasonable expectation of success in order to more effectively control the mobile body’s movement to maintain the user in a defined region relative to the target and imaging devices so that the drones may easily fly such that the imaging devices are positioned at specific positions with respect to the user, thereby reducing and/or eliminating occlusion. See para. [0020]-[0022] of Kamalakantha for motivation. Henry, as modified by Kamalakantha may not explicitly disclose a plurality of connection lines connecting a vertex. However, in the same field of endeavor, Cao discloses a plurality of connection lines connecting a vertex of the target to the imaging devices (see at least para. [0094] of Cao which discloses “Then, initialize a target airspace range of UAV movement, i.e., the UAVs are only allowed to move within the airspace. Then, construct the UAV topology G according to the airspace positions of the UAVs. Specifically, each UAV in the UAV set J can be considered as a vertex in the diagram G. Assuming that V(G) is a set of vertexes in the UAV topology diagram G, and if there is a connectable space-to-space link between any two vertexes in the UAV network topology, then there is a connection line between the two vertexes. The edges of the UAV network topology are constructed by all of the connection lines, and assume that E(G) is the set of the edges of the UAV network topology. Further, for any two vertexes j, j′∈V(G), if there is a path from j to j′ in the UAV network topology G, the diagram G is referred to as a connected diagram”, *Examiner interprets j to be the UAVs on which the imaging devices are mounted). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the apparatus of Henry, as modified by Kamalakantha to further include a plurality of connection lines connecting a vertex, as taught in Cao with a reasonable expectation of success in order to further optimize imaging and control according to line of sigh geometry to ensure the user remains within a defined region bounded by the connection lines. See para. [0094] of Cao for motivation. Regarding claim 2, Henry, as modified by Kamalakantha and Cao discloses wherein the imaging devices (Fig. 2, 208/212/214/216 and see at least para. [0056] of Henry which discloses “a plurality of lower- resolution image sensors 212, 214, and 216 that are spaced out around the topside 218 of the body 220 of the UAV 102” and see at least para. [0055] of Henry which discloses “the image sensors include a high-resolution image sensor 208 that is mounted on a gimbal 210 to support steady, low-blur image capture and object tracking”) are mounted on a plurality of the mobile bodies (see at least para. [0053] of Henry which discloses “multiple UAVs 102 may be used to scan the scan target 110 concurrently. For instance, the multiple UAVs 102 may communicate with each other to divide up the scan target 110 into portions, and may each scan a designated portion of the scan target 110”). Regarding claim 5, Henry, as modified by Kamalakantha and Cao discloses wherein the position information (see at least para. [0120] of Henry which discloses “the controller 104 to enable the UAV 102 to assume several poses to capture distance (range) information for one or more surfaces of the scan target 110 to generate the lower- resolution 3D model”) and the line-of-sight information of the user are acquired by imaging the user using at least one of the imaging devices (see at least para. [0022] of Kamalakantha which discloses “if the line-of sight engine 122 determines that there is no line-of-sight, the drone 110 can be controlled via the navigation engine 120 to become within line-of-sight of at least one human” and see at least para. [0015] of Kamalakantha which discloses “Line-of-sight can be defined by criteria used by the drone. The system 200 can include drone 110 as well as drones 210 a-210 m, the devices 150 associated with respective humans 152 a-152 n, a drone control device 160 to control one or more of the drones 110, 210, and a platform 170 that can provide services based on information provided by the drones 110, 210 and/or devices 150 a-150 n”). Regarding claim 6, Henry discloses An image generation method (see at least para. [0027] of Henry which discloses “capturing a series of images, capturing video, or the like, of the indicated scan target in a thorough and repeatable manner, such as for generating a high resolution 3D map or other 3D model of the scan target”) comprising: acquiring image data (see at least para. [0058] of Henry which discloses “the lower- resolution image sensors 212, 214, 216, 302, 304, and 306 may enable visual inertial odometry (VIO) for high resolution localization and obstacle detection and avoidance. Further, some or all of the image sensors 208, 212, 214, 216, 302, 304, and 306 may be used to scan a scan target to obtain range data“, *Examiner interprets this as acquiring image data of a target) of a target (Fig. 1, 110 and see at least para. [0045] of Henry which discloses “the scan target 110 for performing the scan of the scan target 110. 
In this example, the scan target 110 includes a building 112“) from a plurality of imaging devices (Fig. 2, 208/212/214/216 and see at least para. [0056] of Henry which discloses “a plurality of lower- resolution image sensors 212, 214, and 216 that are spaced out around the topside 218 of the body 220 of the UAV 102” and see at least para. [0055] of Henry which discloses “the image sensors include a high-resolution image sensor 208 that is mounted on a gimbal 210 to support steady, low-blur image capture and object tracking”) mounted on a mobile body (Fig. 1, 102 and see at least para. [0041] of Henry which discloses “an unmanned aerial vehicle (UAV) 102”, *Examiner interprets UAV 102 to be a mobile body since para. [0022] of Applicant’s specification describes “a drone (mobile body) 11, which is an unmanned aerial vehicle”); generating a three-dimensional image of the target based on the image data (see at least para. [0045] of Henry which discloses “The processor(s) onboard the UAV 102 may be configured by executable instructions to generate, receive, or otherwise access a lower-resolution 3D model (not shown in FIG. 1) of the scan target 110 for performing the scan of the scan target 110 … The lower-resolution 3D model generated or otherwise accessed by the UAV 102 may include a set of points in 3D space corresponding to surfaces of the scan target 110 detected during an initial scan of the scan target 110”, *Examiner interprets this to be examples of generation of 3D image of the target); generating a two-dimensional image (Fig. 10, 1002 and see at least para. [0031] of Henry which discloses “obtain an image of the scan target, and the user may create a polygon or other 2D shape on the image of the scan target” and see at least para. [0110] of Henry which discloses “the user interface generates an image of the polygon or other 2D shape 1002”) of the target (Fig. 10, 110 and see at least para. [0109] of Henry which discloses “obtain an image of the scan target 110, which may be presented in the user interface 602” and see at least para. [0110] of Henry which discloses “the user interface generates an image of the polygon or other 2D shape 1002”) as viewed from a designated specific viewpoint (see at least para. [0048] of Henry which discloses “The UAV 102 may be configured to systematically cover the points of the lower-resolution 3D model of the scan target 110 with the fields of view (FOVs)”, *Examiner interprets this as evidence of a specific viewpoint) using the three-dimensional image; a user (Fig. 1, 108 and see at least para. [0041] of Henry which discloses “a user 108”) who operates the mobile body (see at least para. [0042] of Henry which discloses “the user 108 may cause the controller 104 to issue commands to the UAV 102, e.g., “takeoff”, “land”, “follow”, such as via one or more virtual controls presented in a graphical user interface (GUI) and/or via one or more physical controls”, *Examiner interprets these commands will help the user operate the mobile bodies 102); and controlling a position of the mobile body based on position information of the mobile body (see at least para. [0120] of Henry which discloses “the controller 104 to enable the UAV 102 to assume several poses to capture distance (range) information for one or more surfaces of the scan target 110 to generate the lower- resolution 3D model”), and defined by a plurality of lines (see at least para. 
[0216] of Henry which discloses “the normals may be virtual lines extending outward from the points of the lower-resolution 3D model generally perpendicular to a surface of the scan target on which the respective points are located”, *Examiner interprets these virtual lines to be similar to a plurality of connection lines that start at points on the 3D model (target) with each line representing a vector extending outward which is the type of geometric line claimed between a points); a vertex (see at least para. [0200] of Henry which describes “mesh vertices” and “that a mesh 3D model of the scan target has been generated”) of the target to the imaging devices (see at least para. [0200] of Henry which discloses “that a mesh 3D model of the scan target has been generated, the higher-resolution 3D model generating program 1706 may be executed to select a polygon from the mesh, determine one or more captured images that match the location of the selected polygon for determining a texture to apply to the selected polygon, and may apply the determined texture to the face of the selected polygon. These steps may be repeated for each polygon in the mesh 3D model. The higher-resolution 3D model generating program 1706 may also perform a refinement phase for the mesh vertices to line up image content and/or textures between the plurality of polygons and views to make the mesh 3D model more consistent”, *Examiner interprets the “select a polygon from the mesh, determine one or more captured images” to be evidence of the vertex of the target to the imaging devices). Henry may not explicitly disclose acquiring position information and line-of-sight information of a user; and controlling a position of the mobile body based on position information of the mobile body, and the position information and the line-of-sight information of the user, wherein the controlling includes controlling a moving position of the mobile body such that the user is positioned in a region and a plurality of connection lines connecting a vertex. However, in the same field of endeavor Kamalakantha discloses acquiring position information and line-of-sight information (see at least para. [0020] of Kamalakantha which discloses “line-of-sight determination in one step. For example, the device can send out a beacon that can be received by the drone 110 if there is line-of-sight (e.g., an infrared beacon). If the drone 110 receives the signal, then there is line-of-sight”) of a user (Fig. 1, 152 and see at least para. [0018] of Kamalakantha which discloses “the drone 110 has a line-of-sight with the respective devices 150 and/or the associated human 152. For example, the drone 110 may use the GPS or beacon technology to determine the general location of the device and/or human and then use a sensor 128 (e.g., an image sensor, an infrared sensor, etc.) to confirm that there is a line-of-sight between the drone 110 and the respective device and/or human” and see at least para. [0021] of Kamalakantha which discloses “The first purpose is to track the respective humans (e.g., to ensure a location of the humans 152 are within particular parameters). The second purpose is to ensure that drones 110 are within line-of-sight of at least one human”); controlling a position of the mobile body based on position information of the mobile body (see at least para. 
[0014] of Kamalakantha which discloses “a drone capable of tracking humans based on devices and control the drone to be within line-of-sight of at least one of the devices”) and the position information and the line-of-sight information of the user, wherein the controlling (see at least para. [0022] of Kamalakantha which discloses “if the line-of-sight engine 122 determines that there is no line-of-sight, the drone 110 can be controlled via the navigation engine 120 to become within line-of-sight of at least one human” and see at least para. [0015] of Kamalakantha which discloses “Line-of-sight can be defined by criteria used by the drone. The system 200 can include drone 110 as well as drones 210 a-210 m, the devices 150 associated with respective humans 152 a-152 n, a drone control device 160 to control one or more of the drones 110, 210, and a platform 170 that can provide services based on information provided by the drones 110, 210, and/or devices 150 a-150 n”) includes controlling a moving position of the mobile body such that the user is positioned in a region (see at least para. [0022] of Kamalakantha which discloses “if the line-of-sight engine 122 determines that there is no line-of-sight, the drone 110 can be controlled via the navigation engine 120 to become within line-of-sight of at least one human” , *Examiner interprets being within the line-of sight to be equivalent to the user being positioned in a region) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the acquisition unit of Henry to include acquiring position information and line-of-sight information of a user; and controlling a position of the mobile body based on position information of the mobile body and the position information and the line-of-sight information of the user, wherein the controlling includes controlling a moving position of the mobile body such that the user is positioned in a region, as taught in Kamalakantha with a reasonable expectation of success in order to more effectively control the mobile body’s movement to maintain the user in a defined region relative to the target and imaging devices so that the drones may easily fly such that the imaging devices are positioned at specific positions with respect to the user, thereby reducing and/or eliminating occlusion. See para. [0020]-[0022] of Kamalakantha for motivation. Henry, as modified by Kamalakantha may not explicitly disclose a plurality of connection lines connecting a vertex. However, in the same field of endeavor, Cao discloses a plurality of connection lines connecting a vertex of the target to the imaging devices (see at least para. [0094] of Cao which discloses “Then, initialize a target airspace range of UAV movement, i.e., the UAVs are only allowed to move within the airspace. Then, construct the UAV topology G according to the airspace positions of the UAVs. Specifically, each UAV in the UAV set J can be considered as a vertex in the diagram G. Assuming that V(G) is a set of vertexes in the UAV topology diagram G, and if there is a connectable space-to-space link between any two vertexes in the UAV network topology, then there is a connection line between the two vertexes. The edges of the UAV network topology are constructed by all of the connection lines, and assume that E(G) is the set of the edges of the UAV network topology. 
Further, for any two vertexes j, j′∈V(G), if there is a path from j to j′ in the UAV network topology G, the diagram G is referred to as a connected diagram”, *Examiner interprets j to be the UAVs on which the imaging devices are mounted). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the apparatus of Henry, as modified by Kamalakantha to further include a plurality of connection lines connecting a vertex, as taught in Cao with a reasonable expectation of success in order to further optimize imaging and control according to line of sigh geometry to ensure the user remains within a defined region bounded by the connection lines. See para. [0094] of Cao for motivation. Regarding claim 7, Henry discloses A non-transitory computer-readable storage medium storing a program (see at least para. [0063] of Henry which discloses “the computer-readable media 404 may be a type of computer-readable storage media and/or may be a tangible non-transitory media”) causing a computer (see at least para. [0159] of Henry which discloses “the scanning program 408 and the vehicle control program 406 may be implemented as instructions stored in memory or other computer readable media 404 and executable by the one or more processors 402”) to perform: acquiring image data (see at least para. [0058] of Henry which discloses “the lower- resolution image sensors 212, 214, 216, 302, 304, and 306 may enable visual inertial odometry (VIO) for high resolution localization and obstacle detection and avoidance. Further, some or all of the image sensors 208, 212, 214, 216, 302, 304, and 306 may be used to scan a scan target to obtain range data“, *Examiner interprets this as acquiring image data of a target) of a target (Fig. 1, 110 and see at least para. [0045] of Henry which discloses “the scan target 110 for performing the scan of the scan target 110. In this example, the scan target 110 includes a building 112“) from a plurality of imaging devices (Fig. 2, 208/212/214/216 and see at least para. [0056] of Henry which discloses “a plurality of lower- resolution image sensors 212, 214, and 216 that are spaced out around the topside 218 of the body 220 of the UAV 102” and see at least para. [0055] of Henry which discloses “the image sensors include a high-resolution image sensor 208 that is mounted on a gimbal 210 to support steady, low-blur image capture and object tracking”) mounted on a mobile body (Fig. 1, 102 and see at least para. [0041] of Henry which discloses “an unmanned aerial vehicle (UAV) 102”, *Examiner interprets UAV 102 to be a mobile body since para. [0022] of Applicant’s specification describes “a drone (mobile body) 11, which is an unmanned aerial vehicle”); generating a three-dimensional image of the target based on the image data (see at least para. [0045] of Henry which discloses “The processor(s) onboard the UAV 102 may be configured by executable instructions to generate, receive, or otherwise access a lower-resolution 3D model (not shown in FIG. 1) of the scan target 110 for performing the scan of the scan target 110 … The lower-resolution 3D model generated or otherwise accessed by the UAV 102 may include a set of points in 3D space corresponding to surfaces of the scan target 110 detected during an initial scan of the scan target 110”, *Examiner interprets this to be examples of generation of 3D image of the target); generating a two-dimensional image (Fig. 10, 1002 and see at least para. 
[0031] of Henry which discloses “obtain an image of the scan target, and the user may create a polygon or other 2D shape on the image of the scan target” and see at least para. [0110] of Henry which discloses “the user interface generates an image of the polygon or other 2D shape 1002”) of the target (Fig. 10, 110 and see at least para. [0109] of Henry which discloses “obtain an image of the scan target 110, which may be presented in the user interface 602” and see at least para. [0110] of Henry which discloses “the user interface generates an image of the polygon or other 2D shape 1002”) as viewed from a designated specific viewpoint (see at least para. [0048] of Henry which discloses “The UAV 102 may be configured to systematically cover the points of the lower-resolution 3D model of the scan target 110 with the fields of view (FOVs)”, *Examiner interprets this as evidence of a specific viewpoint) using the three-dimensional image; a user (Fig. 1, 108 and see at least para. [0041] of Henry which discloses “a user 108”) who operates the mobile bodies (see at least para. [0042] of Henry which discloses “the user 108 may cause the controller 104 to issue commands to the UAV 102, e.g., “takeoff”, “land”, “follow”, such as via one or more virtual controls presented in a graphical user interface (GUI) and/or via one or more physical controls”, *Examiner interprets these commands will help the user operate the mobile bodies 102); and controlling a position of the mobile body based on position information of the mobile body (see at least para. [0120] of Henry which discloses “the controller 104 to enable the UAV 102 to assume several poses to capture distance (range) information for one or more surfaces of the scan target 110 to generate the lower- resolution 3D model”), and defined by a plurality of lines (see at least para. [0216] of Henry which discloses “the normals may be virtual lines extending outward from the points of the lower-resolution 3D model generally perpendicular to a surface of the scan target on which the respective points are located”, *Examiner interprets these virtual lines to be similar to a plurality of connection lines that start at points on the 3D model (target) with each line representing a vector extending outward which is the type of geometric line claimed between a points) ; a vertex (see at least para. [0200] of Henry which describes “mesh vertices” and “that a mesh 3D model of the scan target has been generated”) of the target to the imaging devices (see at least para. [0200] of Henry which discloses “that a mesh 3D model of the scan target has been generated, the higher-resolution 3D model generating program 1706 may be executed to select a polygon from the mesh, determine one or more captured images that match the location of the selected polygon for determining a texture to apply to the selected polygon, and may apply the determined texture to the face of the selected polygon. These steps may be repeated for each polygon in the mesh 3D model. The higher-resolution 3D model generating program 1706 may also perform a refinement phase for the mesh vertices to line up image content and/or textures between the plurality of polygons and views to make the mesh 3D model more consistent”, *Examiner interprets the “select a polygon from the mesh, determine one or more captured images” to be evidence of the vertex of the target to the imaging devices). 
Henry may not explicitly disclose acquiring position information and line-of-sight information of a user; and controlling a position of the mobile body based on position information of the mobile body, and the position information and the line-of-sight information of the user, wherein the controlling includes controlling a moving position of the mobile body such that the user is positioned in a region and a plurality of connection lines connecting a vertex. However, in the same field of endeavor, Kamalakantha discloses acquiring position information and line-of-sight information (see at least para. [0020] of Kamalakantha which discloses “line-of-sight determination in one step. For example, the device can send out a beacon that can be received by the drone 110 if there is line-of-sight (e.g., an infrared beacon). If the drone 110 receives the signal, then there is line-of-sight”) of a user (Fig. 1, 152 and see at least para. [0018] of Kamalakantha which discloses “the drone 110 has a line-of-sight with the respective devices 150 and/or the associated human 152. For example, the drone 110 may use the GPS or beacon technology to determine the general location of the device and/or human and then use a sensor 128 (e.g., an image sensor, an infrared sensor, etc.) to confirm that there is a line-of sight between the drone 110 and the respective device and/or human” and see at least para. [0021] of Kamalakantha which discloses “The first purpose is to track the respective humans (e.g., to ensure a location of the humans 152 are within particular parameters). The second purpose is to ensure that drones 110 are within line-of-sight of at least one human”); and controlling a position of the mobile body based on position information of the mobile body (see at least para. [0014] of Kamalakantha which discloses “a drone capable of tracking humans based on devices and control the drone to be within line-of-sight of at least one of the devices”), and the position information and the line-of-sight information of the user, wherein the controlling (see at least para. [0022] of Kamalakantha which discloses “if the line-of-sight engine 122 determines that there is no line-of-sight, the drone 110 can be controlled via the navigation engine 120 to become within line-of-sight of at least one human” and see at least para. [0015] of Kamalakantha which discloses “Line-of sight can be defined by criteria used by the drone. The system 200 can include drone 110 as well as drones 210 a-210 m, the devices 150 associated with respective humans 152 a-152 n, a drone control device 160 to control one or more of the drones 110, 210, and a platform 170 that can provide services based on information provided by the drones 110, 210 and/or devices 150 a-150 n”) includes controlling a moving position of the mobile body such that the user is positioned in a region (see at least para. 
[0022] of Kamalakantha which discloses “if the line-of-sight engine 122 determines that there is no line-of-sight, the drone 110 can be controlled via the navigation engine 120 to become within line-of-sight of at least one human” , *Examiner interprets being within the line-of sight to be equivalent to the user being positioned in a region) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the acquisition unit of Henry to include acquiring position information and line-of-sight information of a user; and controlling a position of the mobile body based on position information of the mobile body, and the position information and the line-of-sight information of the user, wherein the controlling includes controlling a moving position of the mobile body such that the user is positioned in a region, as taught in Kamalakantha with a reasonable expectation of success in order to more effectively control the mobile body’s movement to maintain the user in a defined region relative to the target and imaging devices so that the drones may easily fly such that the imaging devices are positioned at specific positions with respect to the user, thereby reducing and/or eliminating occlusion. See para. [0020]-[0022] of Kamalakantha for motivation. Henry, as modified by Kamalakantha may not explicitly disclose a plurality of connection lines connecting a vertex. However, in the same field of endeavor, Cao discloses a plurality of connection lines connecting a vertex of the target to the imaging devices (see at least para. [0094] of Cao which discloses “Then, initialize a target airspace range of UAV movement, i.e., the UAVs are only allowed to move within the airspace. Then, construct the UAV topology G according to the airspace positions of the UAVs. Specifically, each UAV in the UAV set J can be considered as a vertex in the diagram G. Assuming that V(G) is a set of vertexes in the UAV topology diagram G, and if there is a connectable space-to-space link between any two vertexes in the UAV network topology, then there is a connection line between the two vertexes

Prosecution Timeline

Aug 14, 2024
Application Filed
Nov 03, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by the same examiner for similar technology

Patent 12582033
SYSTEMS AND METHODS FOR AUTOMATED GRAIN CART UNLOADING
2y 5m to grant; granted Mar 24, 2026
Patent 12384422
AUTONOMOUS DRIVING CONTROL APPARATUS AND METHOD THEREOF
2y 5m to grant; granted Aug 12, 2025
Patent 12365385
VEHICLE DRIFT CONTROL METHOD AND APPARATUS, VEHICLE, STORAGE MEDIUM AND CHIP
2y 5m to grant; granted Jul 22, 2025
Patent 12344308
A VEHICLE STRUCTURE
2y 5m to grant; granted Jul 01, 2025
Patent 12344323
VEHICLE AERODYNAMIC IMPROVEMENT APPARATUS AND SYSTEM
2y 5m to grant; granted Jul 01, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 97% (+7.3%)
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 762 resolved cases by this examiner. Grant probability derived from career allow rate.
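The footnote says grant probability is derived from the career allow rate. A minimal sketch of that arithmetic, with the with-interview figure assumed to be a simple addition of the +7.3-point interview lift (an assumption, not stated by the source):

    # Grant probability from the career allow rate (683 granted / 762 resolved),
    # with the interview lift assumed to be additive.
    granted, resolved = 683, 762
    allow_rate = granted / resolved            # ~0.896 -> shown as 90%
    interview_lift = 0.073                     # +7.3 points, from the examiner panel

    grant_probability = allow_rate
    with_interview = grant_probability + interview_lift   # ~0.969 -> shown as 97%

    print(f"Grant probability: {grant_probability:.1%}")  # 89.6%
    print(f"With interview:    {with_interview:.1%}")     # 96.9%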
