DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Prior art cited in this Office action:
Goodell (US 20240416948 A1, hereinafter “Goodell”)
Foroozan et al. (US 20210117659 A1, hereinafter “Foroozan”)
Cai (CN 110969578 A, hereinafter “Cai”)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 7-15, 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Goodell (US 20240416948 A1, hereinafter “Goodell”) in view of Foroozan et al. (US 20210117659 A1, hereinafter “Foroozan”).
Regarding claims 1 and 11:
Goodell teaches an association matching method, comprising:
determining first reference coordinates of a first reference position in a first image on a first local map, determining second reference coordinates of a second reference position in a second image on a second local map, wherein the first image is obtained through a first image capturing device, and the second image is obtained through a second image capturing device (Goodell [0020], [0048], [0058], where Goodell teaches: "The map localizer 204 receives the perception data to estimate the current location of the truck 200. Using the perception data from certain sensors, the map localizer 204 generates one or more sensed maps, which the map localizer 204 compares against one or more digital maps stored in the map localizer 204 to determine where the truck 200 is in the world (as global context in a global frame of reference) and/or determine where the truck 200 is on the digital map (as local context in a local frame of reference)");
mapping the first local map and the second local map to a global map, wherein at least one of the first reference coordinates of the first reference position and the second reference
coordinates of the second reference position is mapped to coordinates on the global map according to a conversion relationship (Goodell [0048], [0058], where Goodell teaches: "For instance, the map localizer 204 may receive the perception data from the perception module 202 and/or directly from the various sensors sensing the environment surrounding the truck 200 and generate the sensed map(s) representing the sensed environment. The map localizer 204 may correlate features of the sensed map (e.g., digital representations of the features of the sensed environment) against details on the one or more digital maps (e.g., digital representations of the features of the digital map), such that map localizer 204 aligns the sensed map with the digital map. The map localizer 204 then identifies similarities and differences of the sensed map and digital map in order to estimate the location of the truck 200."); and
correcting the conversion relationship according to coordinates of a third reference position in at least one third image and a fourth reference position in at least one fourth image mapped to the global map through the conversion relationship, wherein the at least one third image is obtained through the first image capturing device, the at least one fourth image is obtained through the second image capturing device, and a corrected conversion relationship minimizes a miss distance between the coordinates of the third reference position mapped to the global map and the coordinates of the fourth reference position mapped to the global map (Goodell [0005], [0077], claims 1 and 3, where Goodell teaches: "generate a sensed map based upon sensor data from the one or more sensors; obtain a base map from a non-transitory storage medium; generate a first scoring map based upon image data of the sensed map in a spatial domain overlaying the image data of the base map in the spatial domain; apply a transform function on the image data of the sensed map to generate the image data of a transformed sensed map in a frequency domain, and on the image data of the base map to generate the image data of a transformed base map in the frequency domain; generate a second scoring map in the frequency domain based upon the image data of the transformed sensed map overlaying the image data of the transformed base map; and update an estimated location of the automated vehicle, by applying at least one of the first scoring map or the second scoring map against a plurality of particles representing estimated locations of the automated vehicle").
Goodell fails to explicitly teach generating each map using a corresponding image capturing device.
However, Goodell teaches using a plurality of sensors to generate a plurality of maps, wherein each generated map can correspond to, or be attributed to, the data of one sensor (Goodell [0019], [0028], [0048]). Foroozan further teaches: "For the image clusters overlay, to use the measurements from different kinds of sensors at various positions, the measurements should be transformed from their own coordinate system into some common coordinate system. Embodiments may include one or more coordinate systems such as: a camera coordinate represented by the standard pinhole model; and a radar coordinate where the radar system provides range and orientation in both angles azimuth and elevation. This information can be converted into a 3D point cloud to describe the target point. The coordinate system could also be the World coordinate system used in a suitable calibration procedure" (Foroozan [0074]). In other words, the map obtained from each camera can be transformed to the world/global map in order to determine the differences and similarities between the maps, and the local maps, or the local map generators, can be adjusted accordingly.
Therefore, taking the teachings of Goodell and Foroozan as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to generate a first map from one or more sensors and a second map from a different set of one or more sensors, to transform and compare each map with the global or world map, and to correct or update the system based on the difference (error), in order to improve the system such that the location of an object in the local map can be more accurate.
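For illustration of the general principle only (this sketch is not drawn from Goodell, Foroozan, or the instant claims; the function names and parameter values are hypothetical), a conversion relationship of the kind discussed above could be a planar rigid transform that maps local-map coordinates onto the global map:

import numpy as np

# Illustrative only: a 2D rigid "conversion relationship" (rotation plus translation)
# that maps local-map coordinates into global-map coordinates.
def make_conversion(theta_rad: float, tx: float, ty: float) -> np.ndarray:
    """Build a 3x3 homogeneous transform from a rotation angle and a translation."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def to_global(T: np.ndarray, local_xy: np.ndarray) -> np.ndarray:
    """Map (N, 2) local-map coordinates to global-map coordinates."""
    pts = np.hstack([local_xy, np.ones((local_xy.shape[0], 1))])
    return (T @ pts.T).T[:, :2]

# Hypothetical values: the second local map is rotated 5 degrees and offset by (2.0, -1.5).
T2 = make_conversion(np.deg2rad(5.0), 2.0, -1.5)
print(to_global(T2, np.array([[10.0, 4.0]])))

Under this reading, correcting the conversion relationship amounts to adjusting the rotation and translation so that matched reference positions observed by the two image capturing devices land on the same global-map coordinates.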
Regarding claims 2 and 12:
Goodell in view of Foroozan teaches wherein steps for correcting the conversion relationship comprise:
defining a target function according to the miss distance; and
minimizing the target function to determine the corrected conversion relationship (Goodell [0040], [0079], [0087]; Foroozan [0047]-[0048], [0079]-[0080]).
Regarding claims 3 and 13:
Goodell in view of Foroozan teaches wherein the at least one third image comprises third images of a plurality of time points, the at least one fourth image comprises fourth images of the time points, and a step for defining the target function according to the miss distance comprises:
defining the target function according to a sum of the miss distance between two coordinates of the third reference position of the time points and the fourth reference position of the time points mapped to the global map (Goodell [0019], [0005], [0071], [0077], claims 1 and 3, fig. 4).
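As an illustration only (not the method of Goodell or Foroozan; the function names and the closed-form solver below are assumptions of this sketch), a target function defined as the sum of miss distances over the time points can be minimized by a standard least-squares, Procrustes-style fit of a corrective planar rigid transform:

import numpy as np

def correct_conversion(p3_global: np.ndarray, p4_global: np.ndarray) -> np.ndarray:
    """Find the 2D rigid correction (R, t) minimizing the target function
    sum_k || (R @ p3_k + t) - p4_k ||^2, where p3_k / p4_k are the third/fourth
    reference positions (one matched pair per time point) already mapped to the
    global map. Returns a 3x3 homogeneous conversion matrix."""
    c3, c4 = p3_global.mean(axis=0), p4_global.mean(axis=0)
    H = (p3_global - c3).T @ (p4_global - c4)                # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = c4 - R @ c3
    T = np.eye(3)
    T[:2, :2], T[:2, 2] = R, t
    return T

def target_function(T: np.ndarray, p3: np.ndarray, p4: np.ndarray) -> float:
    """Sum of squared miss distances over all time points."""
    mapped = (T[:2, :2] @ p3.T).T + T[:2, 2]
    return float(np.sum(np.linalg.norm(mapped - p4, axis=1) ** 2))

Any other conventional minimizer (for example, iterative nonlinear least squares) over the same sum of miss distances would serve equally for purposes of this illustration.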
Regarding claims 4 and 14:
Goodell in view of Foroozan teaches wherein steps for correcting the conversion relationship comprise:
identifying an object in the at least one third image and the at least one fourth image; and
defining a position of the object in the at least one third image as the third reference position,
and defining a position of the object in the at least one fourth image as the fourth reference position (Goodell [0019], [0005], [0077], claims 1, 3, fig. 4).
Regarding claims 5 and 15:
Goodell in view of Foroozan teaches wherein steps for mapping the first local map and the second local map to the global map comprise:
defining the first local map as a reference map; and
determining the conversion relationship of the second reference coordinates converting to
the first reference coordinates on the reference map (Goodell [0019], [0005], [0071], [0077], claims 1 and 3, fig. 4; Foroozan [0074]).
Regarding claims 7 and 17:
Goodell in view of Foroozan teaches further comprising:
defining that a field of view of the first image capturing device partially overlaps a field of view of the second image capturing device (Goodell [0032]-[0033], [0084]-[0085]).
Regarding claims 8 and 18:
Goodell in view of Foroozan teaches wherein the second image capturing device is located in the field of view of the first image capturing device, and the first image capturing device is located in the field of view of the second image capturing device (Goodell [0032]).
Regarding claims 9 and 19:
Goodell in view of Foroozan teaches wherein the second image capturing device is not located in the field of view of the first image capturing device, and the first image capturing device is not located in the field of view of the second image capturing device (Goodell [0032]).
Regarding claims 10 and 20:
Goodell in view of Foroozan teaches wherein a step for determining the first reference coordinates of the first reference position in the first image on the first local map comprises:
converting pixel coordinates of the first reference position in the first image to the first
reference coordinates through a pinhole camera model (Foroozan [0074]).
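As an illustrative sketch only (the intrinsic parameters and the fixed-depth assumption below are hypothetical and are not taken from Foroozan [0074]), a standard pinhole camera model back-projects pixel coordinates to coordinates in the camera frame; a further extrinsic transform, not shown here, would place that point on the first local map:

import numpy as np

# Hypothetical intrinsics for a standard pinhole model: focal lengths (fx, fy)
# and principal point (cx, cy), all in pixels.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def pixel_to_camera(u: float, v: float, depth: float) -> np.ndarray:
    """Back-project pixel (u, v) at a known or assumed depth (meters) to a 3D
    point in the camera coordinate system: X = depth * K^-1 @ [u, v, 1]."""
    return depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

# Example: a first reference position at pixel (400, 300), assumed 12 m from the camera.
print(pixel_to_camera(400.0, 300.0, 12.0))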
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Goodell (US 20240416948 A1, hereinafter “Goodell”) in view of Foroozan et al. (US 20210117659 A1, hereinafter “Foroozan”) and in view of Cai (CN 110969578 A, hereinafter “Cai”).
Regarding claims 6 and 16:
The combination fails to explicitly teach wherein the conversion relationship is a conversion matrix, and the conversion matrix is adapted to perform at least one of rotation and translation on the coordinates.
However, Goodell teaches: "In some embodiments, the map localizer applies a transformation function (e.g., Fourier transform) on the sensor data, sensed map (or sub-map), and/or the base map, thereby generating a transformed sensed map and a transformed base map. The map localizer generates or receives the sensed map and base map in a spatial domain. The transformation function transforms the sensed map and the base map to the frequency domain. The map localizer or other component of the autonomy system generates the transformed sensed map and a transformed base map by applying one or more transformation functions, such as a Fourier transformation function (e.g., Fast Fourier Transform (FFT) or Short Fourier Transform (SFT)). The autonomy system applies the transform function on the image data of the sensed map and base map, to transform the image data from the spatial domain to the frequency domain… Each particle includes, or is otherwise associated with, position information indicating the position of the particle (e.g., coordinate X, Y; geographic lat, long; yaw, rotation), as well as sensor measurements occurring at the particle. The map localizer computes and assigns each particle's position score by computing a difference metric between particle sensor measurements of the sensed map or the sub-maps compared against the sensor measurements of base maps (e.g., for both the reflectivity and height measurements)" (Goodell [0066], [0068], [0077]).
Furthermore, Cai teaches a method comprising the following steps: extracting two local grid maps having an overlapping part; obtaining the two-dimensional matrices respectively corresponding to the two local grid maps; performing a Fourier transform on each two-dimensional matrix to generate two amplitude value matrices; using a phase correlation method on the two amplitude value matrices to generate a pulse function representing the translation amount and rotation amount between the two local grid maps; obtaining the relative transformation relationship of the two local grid maps according to the coordinate values corresponding to the pulse function; and then automatically splicing the two local grid maps (Cai, Abstract and claim 4).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use a matrix rotation and/or translation to convert coordinates from one map to the other, in order to better compare the maps and determine the differences between them, such that errors can be corrected by updating the map information.
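As an illustration only of the phase correlation principle attributed to Cai (the grid sizes, values, and translation-only simplification below are assumptions of this sketch; recovering the rotation amount would conventionally require a log-polar or Fourier-Mellin extension), two overlapping grid maps can be aligned by locating the pulse produced by the normalized cross-power spectrum:

import numpy as np

def phase_correlation_shift(map_a: np.ndarray, map_b: np.ndarray):
    """Estimate the integer (row, col) translation between two same-sized 2D grid
    maps via phase correlation: the normalized cross-power spectrum inverse-transforms
    to a pulse whose location gives the relative shift."""
    Fa, Fb = np.fft.fft2(map_a), np.fft.fft2(map_b)
    cross_power = np.conj(Fa) * Fb
    cross_power /= np.maximum(np.abs(cross_power), 1e-12)   # keep phase information only
    pulse = np.abs(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(pulse), pulse.shape)
    # Wrap shifts larger than half the grid into negative offsets.
    if dy > map_a.shape[0] // 2:
        dy -= map_a.shape[0]
    if dx > map_a.shape[1] // 2:
        dx -= map_a.shape[1]
    return int(dy), int(dx)

# Hypothetical usage: grid_b is grid_a shifted by (3, -5) cells; the pulse location
# recovers that translation, which can then populate a conversion matrix.
grid_a = np.random.rand(128, 128)
grid_b = np.roll(grid_a, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(grid_a, grid_b))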
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEDNEL CADEAU whose telephone number is (571)270-7843. The examiner can normally be reached Mon-Fri 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chieh Fan can be reached at 571-272-3042. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WEDNEL CADEAU/Primary Examiner, Art Unit 2632 January 28, 2026