DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 4, 2026, has been entered.
Status of Claims
Amended: 1, 12 and 23
Cancelled: 2 – 3, 11, 13 – 14, 18 – 19, 21 – 22 and 25
Rejected: 1, 4 – 10, 12, 15 – 17, 20 and 23 – 24
Joint Inventors
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Response to Arguments
Applicant’s arguments on pages 7 – 12 have been fully considered but are not persuasive. In summary, Applicant argues that Troy does not meet the limitations because its rack-and-pinion system moves in a vertical orientation rather than along the width of the UAV, as recited in the limitations. The examiner maintains, however, that the two arrangements are functionally the same. A UAV is able to tilt in any direction. Therefore, if the UAV described in Troy is flying parallel to the ground, the rack and pinion moves vertically; if the UAV flies perpendicular to the ground, the rack and pinion moves horizontally. Thus, while Troy may be physically different from the claimed arrangement, it is functionally the same.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 4, 8, 10, 12, 15 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Troy et al. (US Pub No: 2021/0394902 A1, hereinafter Troy) in view of Wilcox (US Patent No: 10,499,037 B1, hereinafter Wilcox) and Mehta et al. (US Pub No: 2021/0195103 A1, hereinafter Mehta).
Regarding Claim 1:
Troy discloses:
An aerial vehicle comprising: a frame. Paragraph [0057] describes a UAV 2 that includes a body frame 4.
a camera assembly attached to the frame, the camera assembly comprising a cascaded bi-directional linear actuator comprising a motor and a rack-and-pinion system comprising a gear rack along a length of the aerial vehicle, and a camera attached to the cascaded bi-directional linear actuator such that the camera is configured to be moved to different positions linearly along the gear rack by the rack-and-pinion system of the cascaded bi-directional linear actuator based on a rotational motion of the motor to capture a plurality of images of an object. Paragraph [0057] and figure 3A describe a rack-and-pinion arrangement that is coupled to a linear guide to constrain the parallel racks. Paragraph [0018] describes a marking module connected to the rack-and-pinion. Paragraph [0116] describes a video camera 30 connected to the marking device 24, which is connected to the rack-and-pinion. Paragraph [0060] describes linear actuators in the form of a rack-and-pinion.
*Note: while Troy does not fully disclose the limitation, particularly that the rack-and-pinion system extends along the length of the aerial vehicle, it is functionally the same. If the UAV described in Troy is flying parallel to the ground, the rack and pinion moves vertically; if the UAV flies perpendicular to the ground, the rack and pinion moves horizontally. It is a flipped version of the arrangement described in the limitations.
a processor in electrical connection to the camera such that the processor is configured to determine one or more dimensions of the object based on the plurality of images. Paragraph [0139] describes a processor that receives the measured motion characteristics of the UAV to determine a control signal.
a flight system configured to move the aerial vehicle. Paragraph [0139] describes a processor that receives the measured motion characteristics of the UAV to determine a control signal to adjust a motion characteristic.
and an energy system in electrical connection to the camera assembly, the processor, and the flight system. Paragraph [0090] describes an electric power system 134 that includes a battery and associated electronics for providing electric power.
the camera assembly comprising a cascaded bi-directional linear actuator and a camera attached to the cascaded bi-directional linear actuator such that the camera is configured to be moved to different positions by the cascaded bi-directional linear actuator to capture a plurality of images of an object. Paragraph [0105] describes driving rotation of a pinion gear, which allows the camera system to move in an x-direction and is driven by an actuator. Therefore, the actuator is bi-directional.
Troy does not disclose a depth between a camera and a reference region of an object and a camera being a single monocular camera.
Wilcox teaches:
wherein the processor is configured to determine a depth between the camera and a reference region of the object. Column 10, lines 18 - 53 describes a processor 908. Column 3, lines 30 - 38 describes that the first and second cameras can be aligned to provide 3D images. Additionally, claim 5 describes a depth sensor.
the camera being a single monocular camera. Column 3, lines 6 – 29 describe a first camera 102A and a second camera 102B that simulate binocular vision. That means each camera is monocular.
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date, with a reasonable expectation of success, to have modified Troy to incorporate the teachings of Wilcox to show a depth between a camera and a reference region of an object. One would have been motivated to do so to extract 3D information from various objects and because monocular cameras are cheap and versatile (Column 3, lines 6 – 29 of Wilcox).
Troy does not disclose determining a distance between adjacent positions of the camera to capture successive images of the plurality of images based on the depth between the camera and a reference region of the object and an overlap percentage between successive images.
Mehta teaches:
wherein the processor is further configured to determine a distance between adjacent positions of the camera to capture successive images of the plurality of images based on the depth between the camera and a reference region of the object. Paragraph [0025] describes a single camera that is configured to capture images. A sequence of overlapping images is taken.
and a desired overlap percentage between the successive images. Paragraph [0101] describes a desired overlap for image acquisition about the standard roll, pitch and yaw axis for image construction.
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date, with a reasonable expectation of success, to have modified Troy to incorporate the teachings of Mehta to show determining a distance between adjacent positions of the camera to capture successive images of the plurality of images based on the depth between the camera and a reference region of the object and an overlap percentage between successive images. One would have been motivated to do so to ensure maximum coverage of the ground area ([0012] of Mehta).
Claims 12 and 23 are substantially similar to claim 1 and are rejected on the same grounds.
Regarding Claim 4:
Mehta teaches:
The aerial vehicle according to claim 2, wherein the processor is configured to stitch the plurality of images to form a stitched image. Paragraph [0101] describes a desired overlap for image acquisition about the standard roll, pitch and yaw axis for image construction. The images are then stitched together to create the desired 3D image reconstruction.
The motivation to combine Mehta with Troy is the same as in claim 1.
Claim 15 is substantially similar to claim 4 and is rejected on the same grounds.
Regarding Claim 8:
Troy discloses:
The aerial vehicle according to claim 1, wherein the flight system comprises a propulsion system configured to propel the aerial vehicle, and a flight controller configured to control the propulsion system to move in a desired direction. Paragraph [0084] describes a flight controller 32 that includes a computer 36 and a plurality of motor controllers 35. These control the rotor motors of the UAV, which dictate lift, drag, etc.
Regarding Claim 10:
Troy discloses:
The aerial vehicle according to claim 1, wherein the energy system comprises a power system configured to be electrically coupled to a power tethering unit. Paragraph [0090] describes an electric power system 134 that includes a battery and associated electronics for providing electric power.
Claim(s) 5 – 7 and 16 – 17 are rejected under 35 U.S.C. 103 as being unpatentable over Troy in view of Wilcox and Mehta and further in view of Holzer et al. (US Pub No: 2018/0199025 A1, hereinafter Holzer).
Regarding Claim 5:
Troy, Wilcox and Mehta teach the above limitations in claim 1. Troy, Wilcox and Mehta do not teach a semantic segmentation neural network.
Holzer teaches:
The aerial vehicle according to claim 4, wherein the processor is configured to determine one or more targeted regions in the stitched image using an image semantic segmentation neural network. Paragraph [0066] describes a semantic segmentation with neural networks.
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date, with a reasonable expectation of success, to have modified Troy to incorporate the teachings of Holzer to show a semantic segmentation neural network. One would have been motivated to do so to improve AI performance because semantic segmentation neural networks are clear and intuitive, allow the AI to infer new facts, are flexible, and provide contextual understanding.
Claim 16 is substantially similar to claim 5 and is rejected on the same grounds.
Regarding Claim 6:
Wilcox discloses:
The aerial vehicle according to claim 5, wherein the processor is configured to determine a depth between the camera and each targeted region of the one or more targeted regions based on images of the plurality of images capturing the one or more targeted regions. Column 10, lines 18 - 53 describes a processor 908. Column 3, lines 30 - 38 describes that the first and second cameras can be aligned to provide 3D images. Additionally, claim 5 describes a depth sensor.
The reason to combine Wilcox with Troy is for the same reason as in claim 1.
Claim 17 is substantially similar to claim 6 and is rejected on the same grounds.
Regarding Claim 7:
Wilcox and Mehta teach:
The aerial vehicle according to claim 6, wherein the processor is configured to determine one or more dimensions of the object in the stitched image; wherein the processor is configured to determine one or more dimensions of the reference region in the stitched image; and wherein the one or more dimensions of the object are determined based on the depth between the camera and each targeted region of the one or more targeted regions, the one or more dimensions of the object in the stitched image, the one or more dimensions of the reference region in the stitched image, one or more dimensions of the reference region in the object, and the depth between the camera and the reference region. Column 10, lines 18 - 53 of Wilcox describes a processor 908. Column 3, lines 30 - 38 of Wilcox describes that the first and second cameras can be aligned to provide 3D images. Additionally, claim 5 describes a depth sensor. Paragraph [0101] of Mehta describes a desired overlap for image acquisition about the standard roll, pitch, and yaw axes for image construction. The images are then stitched together to create the desired 3D image reconstruction.
The reason to combine Wilcox and Mehta with Troy is for the same reason as in claim 1.
Claim(s) 9 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Troy in view of Wilcox and Mehta and further in view of Bry et al. (US Pub No: 2021/0214068 A1, hereinafter Bry).
Regarding Claim 9:
Troy, Wilcox and Mehta teach the above limitations in claim 1. Troy, Wilcox and Mehta do not teach an illumination system to provide illumination to the object.
Bry teaches:
The aerial vehicle according to claim 1, further comprising: an illumination system configured to provide illumination to the object. Paragraph [0094] describes a UAV configured to include powered illumination sources to emit light into the surrounding environment.
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date, with a reasonable expectation of success, to have modified Troy to incorporate the teachings of Bry to provide illumination to the object. One would have been motivated to do so to enable the UAV to operate in low light levels ([0094] of Bry).
Claim 20 is substantially similar to claim 9 and is rejected on the same grounds.
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Troy in view of Wilcox and Mehta and further in view of Dutoit (US Pub No: 2009/0084048 A1, hereinafter Dutoit).
Regarding Claim 24:
Troy, Wilcox and Mehta teach the above limitations in claim 1. Troy, Wilcox and Mehta do not teach an object being a rail viaduct bearing within a cavity.
Dutoit teaches:
The method according to claim 23, wherein the object is a rail viaduct bearing, and wherein the rail viaduct bearing is within a cavity formed by a rail viaduct and a structural pier. Paragraph [0016] describes a load bearing for each slab of an elevated railway track having viaducts.
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date, with a reasonable expectation of success, to have modified Troy to incorporate the teachings of Dutoit to show an object being a rail viaduct bearing within a cavity. One would have been motivated to do so because an aerial vehicle can identify objects on the ground.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Alfrasheed (US Pub No: 2022/0169381 A1): A system and methodology for launching, flying and perching on a cylindrically curved surface in an environment without human intervention. The system and methodology include an environment awareness sensor device suite having a depth camera arranged to capture and output image data and 3D point cloud data of a field of view; an asset targeting unit arranged to set an asset as a destination location for a landing; a trajectory path determiner arranged to calculate a trajectory path to the destination location; a flight controller arranged to launch and fly the autonomous aerial vehicle to the destination location according to the trajectory path; a situational status determiner arranged to, in real-time, predict a location of an object with respect to the autonomous aerial vehicle based on 3D point cloud data for the object, determine the object is the asset based on a confidence score and autonomously land on the asset.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAY KHANDPUR whose telephone number is (571)272-5090. The examiner can normally be reached Monday - Friday 8:30 - 6:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thomas Worden can be reached at (571) 272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAY KHANDPUR/Primary Patent Examiner, Art Unit 3658