Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The United States Patent & Trademark Office appreciates the application submitted by the inventor/assignee. The Office has reviewed the application and provides the comments below.
Claim Status
Claims 1-3, 5-6, 8 and 10 are rejected under 35 U.S.C. § 102 as anticipated by Peng.
Claim 4 is rejected under 35 U.S.C. § 103 over Peng in view of Mamani.
Claim 11 is rejected under 35 U.S.C. § 103 over Peng in view of Poppe.
Claim 12 is rejected under 35 U.S.C. § 103 over Peng in view of Poppe, and further in view of Mamani.
Claims 7 and 9 are objected to.
Claim Objections
Claims 1-12 are objected to because of informalities. The examiner recommends the following changes.
Claim 1, line(s) 7, discloses “for each of the plurality of locations.” The claim uses “locations” but the preceding limitation recites “points.” “Locations” lacks antecedent basis.
Claims 2-12 depend directly or indirectly from claim 1 and are therefore also objected to.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5-6, 8 and 10 are rejected under 35 U.S.C. 102 as being anticipated by Peng et al. (Peng, Jianqing, et al. "Pose measurement and motion estimation of space non-cooperative targets based on laser radar and stereo-vision fusion." IEEE, published 2018, hereinafter Peng).
CLAIM 1
[Image: media_image1.png]
Regarding claim 1, Peng teaches a method (Peng, abstract: “a method based on the 3D laser radar and stereo-vision information is proposed to measure the pose and estimate the motion for non-cooperative targets”) of approximating a non-earth imaging (NEI) camera model for a non-earth sensor (Peng, see FIG. 1, sensors are mounted on a satellite, and imaging target is a space non-cooperative object), the method comprising: obtaining a physical camera model (Peng, page 3011, right col, see reconstructed text below);
receiving a target range value (Peng, page 3010, left col, second paragraph: “Taking China’s SinoSat-2 as an example... In the whole space on-orbit service process, this paper focuses on the final tracking phase, that is, the Euler distance between the target and the service star is between 1m∼5m. As shown in Fig. 1, … the non-cooperative target satisfies the requirements of 3D reconstruction within the measurement range”; page 3011, left col, fourth paragraph: “… determining whether the relative distance d between the target and the service star is less than the distance threshold δd=5(m), and if d≤δd, enabling stereo-vision and Lidar data fusion instructions;”); determining a plurality of space-based coordinates (Peng, page 3011, left col, fourth paragraph: “(2) Simultaneously acquiring the binocular image of the non-cooperative target and the point cloud data of the laser radar”) of a plurality of points about a line of sight of the sensor in proximity to a distance corresponding to the received target range value (Peng, page 3011, left col, fourth paragraph: “… determining whether the relative distance d between the target and the service star is less than the distance threshold δd=5(m), and if d≤δd, enabling stereo-vision and Lidar data fusion instructions”; see FIG. 4 with annotations below; 3D coordinates of an object are acquired, the object lying at the end of the sensors’ lines of sight at a distance of less than 5 m);
[Image: media_image2.png]
[Image: media_image3.png]
for each of the plurality of locations, determining line and sample coordinates on an image thereof via the physical camera model based on the space-based coordinates of the plurality of points (Peng, page 3010, right col, last paragraph, see reconstructed text below);
and based on the determined line and sample coordinates for each of the plurality of points, fitting an approximate camera model thereto. (Peng, page 3011, right col, see reconstructed text below; the Examiner notes the transformation matrix corresponds to the approximate camera model)
[Image: media_image4.png]
CLAIM 2
Regarding claim 2, Peng teaches the method of Claim 1. In addition, Peng teaches the sensor comprises an image capture device. (Peng, see FIG. 3 and FIG. 10; page 3016, left col: “3D laser radar (VelodyneVLP-16) … a binocular camera (GS3-U3-41C6C) …”; the system has two cameras that capture 2D images and one lidar sensor that captures 3D point cloud data)
CLAIM 3
[Image: media_image1.png]
[Image: media_image5.png]
Regarding claim 3, Peng teaches the method of Claim 1. In addition, Peng teaches receiving the physical camera model comprises receiving a physics-based camera model that comprises a plurality of sensor-based parameters. (Peng, page 3011, right col; page 3016, left col; see reconstructed text below)
CLAIM 5
Regarding claim 5, Peng teaches the method of Claim 1. In addition, Peng teaches receiving a distance at which a target is located along a line of sight of the sensor; and
determining a distance of a portion of space along a line of sight of the sensor. (Peng, page 3011, left col, fourth paragraph: “… determining whether the relative distance d between the target and the service star is less than the distance threshold δd=5(m), and if d≤δd, enabling stereo-vision and Lidar data fusion instructions”. The Examiner notes distance threshold δd corresponds to the “received distance”, and relative distance d corresponds to the “determined distance”. The Examiner interprets “a portion of space along a line of sight of the sensor” as the space between the sensor and the target)
CLAIM 6
Regarding claim 6, Peng teaches the method of Claim 1. In addition, Peng teaches determining the plurality of space-based coordinates of the plurality of points comprises determining the plurality of space-based coordinates thereof inside a volume enveloping an intersection of the line of sight of the sensor and an end of the target range value opposite the image capture device. (Peng, see FIG. 4. Coordinates of the target’s surface are captured by the Lidar sensor; the Examiner notes the surface of the target contains the volume of the target object, and that volume envelops the intersection of the sensor’s line of sight and the distance between the sensor and the target)
CLAIM 8
[Image: media_image4.png]
Regarding claim 8, Peng teaches the method of Claim 1. In addition, Peng teaches fitting the approximate camera model to the plurality of points and their corresponding line and sample coordinates comprises determining one or more parameters of the approximate camera model. (Peng, page 3011, right col, see reconstructed text below; the Examiner notes m11-m34 of matrix M correspond to the “parameters of the approximate camera model”)
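The Examiner's mapping can be illustrated with a minimal sketch (hypothetical numbers only, not values from Peng's record): a 3×4 matrix M with entries m11-m34 maps a homogeneous 3D point to homogeneous image coordinates, and the perspective divide yields the line and sample values.

```python
# Illustrative sketch only (hypothetical values, not from Peng's record):
# a 3x4 projection matrix M, entries m11..m34, maps a homogeneous 3D
# point [X, Y, Z, 1] to homogeneous image coordinates [u, v, w]; the
# perspective divide u/w, v/w gives the sample and line values.
M = [
    [800.0,   0.0, 320.0, 0.0],  # m11..m14 (hypothetical focal/principal-point terms)
    [  0.0, 800.0, 240.0, 0.0],  # m21..m24
    [  0.0,   0.0,   1.0, 0.0],  # m31..m34
]

point = [0.5, -0.25, 4.0, 1.0]   # 3D point in homogeneous coordinates

u, v, w = (sum(m * x for m, x in zip(row, point)) for row in M)
sample, line = u / w, v / w      # image coordinates after perspective divide
```

Fitting the approximate model then amounts to solving for the twelve entries m11-m34 from point correspondences.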
CLAIM 10
Regarding claim 10, Peng teaches A method of estimating space-based coordinates of a target based on an image thereof (Peng, page 3016-3017, section B, subsection 1) 3D Point Cloud Reconstruction Based on Stereo-Vision: “Here, a three-dimensional reconstruction system based on stereo-vision is established.”), the method comprising:
[Image: media_image3.png]
obtaining the image of the target in space, the image including line and sample coordinates of the target thereon (Peng, page 3010, right col, last paragraph, see reconstructed text below);
receiving a target range value (Peng, page 3010, left col, second paragraph: “Taking China’s SinoSat-2 as an example... In the whole space on-orbit service process, this paper focuses on the final tracking phase, that is, the Euler distance between the target and the service star is between 1m∼5m. As shown in Fig. 1, … the non-cooperative target satisfies the requirements of 3D reconstruction within the measurement range”; page 3011, left col, fourth paragraph: “… determining whether the relative distance d between the target and the service star is less than the distance threshold δd=5(m), and if d≤δd, enabling stereo-vision and Lidar data fusion instructions;”); and
based on an application of the approximate camera model of claim 1 to the line and sample coordinates of the target in the image and to the target range (Peng, page 3011, right col, see reconstructed text below),
[Image: media_image4.png]
obtaining corresponding three-dimensional coordinates of the target in space (Peng, section B, subsection 1: “The satellite model is reconstructed using the algorithm described in the paper. The camera imaging results is shown in Fig. 11. The feature point matching results at different positions are shown in Fig. 12 and Fig. 14. The 3D reconstruction point cloud map corresponding to different positions is shown in Fig. 13 and Fig. 15, respectively”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
CLAIM 4
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Peng in view of Mamani et al. (Mamani, Jaime Gerson Cuba, and Pablo Raúl Yanyachi Aco Cardenas. "LiDAR small satellite for space debris location and attitude determination." IEEE, published 2019, hereinafter Mamani).
[Image: media_image8.png]
Regarding Claim 4, Peng teaches the method of Claim 1. In addition, Peng teaches the plurality of sensor-based parameters comprises at least one of: a distortion model of the sensor optics (Peng, page 3016, left col, last paragraph, see reconstructed text below);
[Image: media_image9.png]
and a model of a focal plane geometry of the sensor. (Peng, page 3011, right col, see reconstructed text below)
Peng does not explicitly disclose sensor ephemeris data or sensor attitude.
Mamani is in the same field of art of utilizing satellite lidar sensor to detect and track space debris. Further, Mamani teaches sensor ephemeris data; sensor attitude. (Mamani, page 3, left col, last paragraph: “An adaptation of a typical LiDAR system is needed to achieve the desired result. The development consists of three main components: the GNSS system (which provides the position information of the small satellite), the attitude determination system and the laser payload system (which provides range information from the spacecraft to the space debris) and the reflection intensity of each laser point measured… This adaptation results in an equation of three coordinate systems, which are: the GNSS and orbit determination system, the laser beam coordinate system and the payload coordinate system.”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Peng by incorporating the GNSS and attitude system taught by Mamani, to make a satellite lidar system that can locate space debris; thus, one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need for a real-time space debris location system in practical applications of such a system (Mamani, page 3, third paragraph: “The GNSS and orbit determination system plays a big role in the LiDAR space debris location (LSDL). Since it is needed a real-time estimation, GNSS data and orbit determination of the spacecraft data must be computed onboard. The velocity and position of Earth-orbiting objects can be predicted, from the TLE, by using the SGP model.”).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
CLAIM 11
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Peng in view of Poppe et al. (Mauricio Poppe, “Transformation matrix for projection of 3D objects into a 2D plane (projection transform)”, mauriciopoppe.com, a 02/13/2024 archived copy is attached, hereinafter Poppe).
Regarding Claim 11, Peng teaches a method of estimating line and sample coordinates of a target on an image based on space-based coordinates (Peng, page 3011, right col, see reconstructed text below):
[Image: media_image4.png]
obtaining space-based coordinates of the target in space (Peng, page 3011, left col, fourth paragraph: “(2) Simultaneously acquiring the binocular image of the non-cooperative target and the point cloud data of the laser radar”);
Peng does not explicitly disclose based on an application of the approximate camera model of claim 1 to the space-based coordinates of the target, obtaining corresponding line and sample coordinates of the target in the image thereof.
Poppe is in the same field of art of processing point cloud data. Further, Poppe teaches based on an application of the approximate camera model of claim 1 to the space-based coordinates of the target, obtaining corresponding line and sample coordinates of the target in the image thereof. (Poppe, page 1: “In Computer Graphics 3D objects created in an abstract 3d world will eventually need to be displayed, to view these objects in a 2d plane like a screen objects will need to be projected from the 3D space to the 2D plane with a transformation matrix.”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Peng by incorporating the method of transforming 3D coordinate data into 2D coordinate data taught by Poppe, to make a lidar imaging system that can transform a 3D model into 2D images; thus, one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need to visualize 3D data (Poppe, page 1: “In Computer Graphics 3D objects created in an abstract 3d world will eventually need to be displayed, to view these objects in a 2d plane like a screen objects will need to be projected from the 3D space to the 2D plane with a transformation matrix.”).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
CLAIM 12
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Peng in view of Poppe, and further in view of Mamani.
Regarding Claim 12, the combination of Peng and Poppe teaches the method of Claim 11.
The combination of Peng and Poppe does not explicitly disclose obtaining the space based coordinates of the target comprises obtaining position data of the target.
Mamani is in the same field of art of utilizing satellite lidar sensor to detect and track space debris. Further, Mamani teaches obtaining the space based coordinates of the target comprises obtaining position data of the target. (Mamani, page 3, section C, second paragraph: “An adaptation of a typical LiDAR system is needed to achieve the desired result. The development consists of three main components: the GNSS system (which provides the position information of the small satellite), the attitude determination system and the laser payload system (which provides range information from the spacecraft to the space debris) and the reflection intensity of each laser point measured”; page 4, left col, second paragraph: “As the spacecraft position - velocity vector and laser range vector have been obtained, we can get the space debris position and velocity vector”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Peng by incorporating the GNSS and attitude system taught by Mamani, to make a satellite lidar system that can locate space debris; thus, one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need for a real-time space debris location system in practical applications of such a system (Mamani, page 3, third paragraph: “The GNSS and orbit determination system plays a big role in the LiDAR space debris location (LSDL). Since it is needed a real-time estimation, GNSS data and orbit determination of the spacecraft data must be computed onboard. The velocity and position of Earth-orbiting objects can be predicted, from the TLE, by using the SGP model.”).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Allowable Subject Matter
Claims 7 and 9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The closest prior art references for Claims 7 and 9 are:
Peng et al. (Peng, Jianqing, et al. "Pose measurement and motion estimation of space non-cooperative targets based on laser radar and stereo-vision fusion." IEEE) which is directed to a method to measure the pose and estimate the motion for non-cooperative targets based on the 3D laser radar and stereo-vision information.
Li et al. (Li, Peng, et al. "A pose measurement method of a non-cooperative spacecraft based on point cloud feature." IEEE) which is directed to an autonomous measurement method for relative position and attitude of space non-cooperative spacecraft.
Comellini et al. (Comellini, Anthea, et al. "Vision-based navigation for autonomous space rendezvous with non-cooperative targets." IEEE) which is directed to a vision-based navigation method for space rendezvous with non-cooperative targets.
Peng, Li, and Comellini all teach imaging and pose measurement of non-cooperative targets, with the target size only in the millimeter range in all three references. None teaches “the volume is within a range of about 25 meters to about 100 meters of an ephemeris data of the target.”
Pertinent Art
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Chen et al. (CN-116518941-A, a translated copy is attached) which is directed to an aircraft target positioning method and system based on a satellite-borne bidirectional swing scanning imaging system.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NHUT HUY (JEREMY) PHAM whose telephone number is (703)756-5797. The examiner can normally be reached Mo - Fr. 8:30am - 6pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, O'Neal Mistry, can be reached at (313)446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NHUT HUY PHAM/Examiner, Art Unit 2674
/Ross Varndell/Primary Examiner, Art Unit 2674