DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 26 is objected to because of the following informalities: incorrect spelling. The claim recites “using an intinic matrix of the camera”; however, the examiner believes this to be a simple typographical error and interprets it as “using an intrinsic matrix of the camera”. Appropriate correction is required.
Response to Arguments
Applicant's arguments with respect to claims 12-18 and 20-26 have been considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 12-17, 20, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Thomas et al. (“Thomas”) (WO 2021/175434) in view of Zedayko et al. (“Zed”) (U.S. PG Publication No. 2021/0035289) and Ziyaee et al. (“Ziy”) (U.S. Patent No. 10,860,034).
In regards to claim 12, an imaging system is shown by Thomas as seen in at least ¶0018, 0031, 0035 and 0054, wherein a single image may be captured and then used to create a bird's-eye-view map representation of the image using known parameters of the camera, as described in ¶0070. In particular, ¶0073 describes that the cameras are calibrated and that their respective focal lengths are therefore known. From there, ¶0027-0030, 0055, 0059 and 0061 describe extracting features from the image data in order to transform the image space into said bird's-eye-view space. Thomas, however, fails to specify that the known focal lengths may then be taken into account in order to normalize the image data for proper consistency between cameras. In a similar endeavor, Zed teaches, as seen in ¶0012, that normalization of image data is performed to account for image variability in order to allow for further standardized analysis, and may include color correction, color setting, lighting setting, magnification and focal length. Therefore, as seen in for example ¶0068, normalization of the image data may be done before identifying and extracting various properties and attributes of the image data and before calculating features of the image data. It would have been obvious to one of ordinary skill in the art to incorporate the teaching of image normalization as taught by Zed into Thomas because it allows normalization to control for different image acquisition settings [such as focal length], as described in at least ¶0124, to allow for further standardized analysis as described in ¶0012.
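As a purely illustrative aside (not relied upon in the rejection), the focal-length normalization contemplated by the Thomas/Zed combination can be sketched in Python as follows. The sketch assumes a simple pinhole-camera rescaling model; the function name and parameters are illustrative assumptions and do not purport to reproduce either reference's actual implementation.

    import numpy as np
    import cv2  # OpenCV; used here only to resample the image

    def normalize_focal_length(image: np.ndarray,
                               nominal_focal_px: float,
                               target_focal_px: float) -> np.ndarray:
        # Under a pinhole model, image coordinates scale linearly with
        # focal length, so resampling by f_target / f_nominal makes images
        # captured at different focal lengths directly comparable before
        # feature extraction (illustrative assumption only).
        scale = target_focal_px / nominal_focal_px
        h, w = image.shape[:2]
        new_size = (int(round(w * scale)), int(round(h * scale)))
        return cv2.resize(image, new_size, interpolation=cv2.INTER_LINEAR)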
Therefore, together Thomas and Zed teach a method for generating at least one representation of a bird's eye view of at least a part of the environment of a system (See ¶0003-0005, 0018 and 0054 of Thomas), the method comprising the following steps:
a) obtaining a digital image representation (See ¶0031 and 0035 of Thomas), which represents a single digital image with at least one camera parameter of a camera that captured the digital image (See ¶0018, 0031 and 0054 of Thomas, wherein a single image may be used to create the bird's-eye-view map representation of the image, while ¶0070 describes that the system uses known camera parameters for its spatial processing of information and feature mapping), wherein the at least one camera parameter of the camera includes a nominal focal length (See ¶0012, 0068 and 0124 of Zed);
a1) normalizing the digital image representation to a normalized focal length using the nominal focal length of the camera (See ¶0012, 0068 and 0124 of Zed);
b) extracting at least one feature from the normalized digital image representation (See ¶0027-0030, 0055, 0059 and 0061 of Thomas, taken in view of ¶0012, 0068 and 0124 of Zed, wherein the image data may be normalized for consistency so that appropriate feature extraction may then take place), wherein the at least one feature is generated in different scales (See ¶0027, 0055, 0061, 0073 and 0076 of Thomas); and
c) transforming the at least one feature from the image space into a bird's eye view space (See ¶0027-0030, 0054-0055, 0059 and 0061 of Thomas; also see at least ¶0068-0080, which describe how image-space features are translated into bird's-eye-view features), so as to obtain at least one bird's eye view feature (See at least ¶0055, 0059, 0070 and 0075 of Thomas, wherein the bird's-eye-view space also contains features).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Zed into Thomas because it allows for standardization [normalization] of image capture data to then allow for further analysis, as described in at least ¶0012.
Thomas additionally fails to teach d) after the transforming, refining the at least one bird’s eye view feature using LeakyReLU or Resnet blocks.
In a similar endeavor, Ziy teaches d) after the transforming, refining the at least one bird's eye view feature using LeakyReLU or Resnet blocks (See for example col. 11, ll. 11-41 in view of FIG. 7).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Ziy into Thomas because it allows for bird's-eye view maps to be further refined to a point where objects may be labeled with classifications from a set of classes, as described in at least the Abstract, thus allowing for further processing and information retrieval of image data.
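As a purely illustrative aside (not relied upon in the rejection), a residual refinement stage of the kind recited in step d) may be sketched in Python/PyTorch as follows; the module structure, channel handling and activation slope are illustrative assumptions and are not drawn from Ziy.

    import torch
    import torch.nn as nn

    class BEVRefinementBlock(nn.Module):
        # A ResNet-style residual block with LeakyReLU activations that
        # refines a bird's-eye-view feature map without changing its shape.
        def __init__(self, channels: int):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.act = nn.LeakyReLU(negative_slope=0.1)

        def forward(self, bev_features: torch.Tensor) -> torch.Tensor:
            out = self.act(self.conv1(bev_features))
            out = self.conv2(out)
            return self.act(out + bev_features)  # skip connection, then activation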
Although not used in the rejection, an additional reference is provided to the applicant which also teaches the refinement of bird's eye view models through the use of a convolutional model supplemented with ReLU [Rectified Linear Unit] layers, as seen in at least ¶0053 in view of FIG. 5 of Sheu et al. (U.S. PG Publication No. 2022/0067408).
In regards to claim 13, Thomas teaches the method according to claim 12, wherein the method is performed for training a system and/or a deep learning algorithm to describe at least a part of a 3D environment around the system (See ¶0019, 0023-0024 and 0049).
In regards to claim 14, Thomas teaches the method according to claim 12, wherein the transforming of step c) includes a feature compression (See ¶0011, 0070 and FIG. 4 wherein the vertical dimension may be collapsed and bottlenecked to a certain size, thus compressing the vertical feature data).
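As a purely illustrative aside, one conventional way to realize such a vertical-feature compression is to fold the height axis of an image-space feature map into the channel axis and bottleneck it with a 1x1 convolution. The sketch below is an assumption about how such a collapse could be implemented and is not Thomas's disclosed architecture.

    import torch
    import torch.nn as nn

    def compress_vertical(features: torch.Tensor,
                          bottleneck: nn.Conv2d) -> torch.Tensor:
        # features: (batch, channels, height, width).  Fold the vertical
        # axis into the channel axis, then bottleneck to a fixed size,
        # leaving one compressed feature vector per image column.
        b, c, h, w = features.shape
        collapsed = features.reshape(b, c * h, 1, w)
        return bottleneck(collapsed)

    # Example: collapse a (2, 64, 16, 200) map to (2, 128, 1, 200) with
    # bottleneck = nn.Conv2d(64 * 16, 128, kernel_size=1)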
In regards to claim 15, Thomas teaches the method according to claim 12, wherein the transforming of step c) includes a feature expansion (See FIG. 4 wherein features are expanded along the depth axis).
In regards to claim 16, Thomas teaches the method according to claim 12, wherein the transforming of step c) includes an inverse perspective mapping feature generation (See ¶0011).
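As a purely illustrative aside, inverse perspective mapping is conventionally realized as a homography warp from the image plane onto the ground plane. The minimal Python sketch below assumes a flat ground plane and a precomputed 3x3 homography H, and is not Thomas's specific implementation.

    import numpy as np
    import cv2

    def inverse_perspective_map(image: np.ndarray, H: np.ndarray,
                                bev_size: tuple) -> np.ndarray:
        # Warp the camera image onto the ground plane to obtain a
        # bird's-eye view of size bev_size = (width, height).  H would
        # ordinarily be derived from the camera's intrinsic and extrinsic
        # parameters under the flat-ground assumption.
        return cv2.warpPerspective(image, H, bev_size, flags=cv2.INTER_LINEAR)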
In regards to claim 17, Thomas teaches the method according to claim 12, wherein the transforming of step c) includes a resampling of features (See ¶0029 and 0061 in view of FIG. 4 wherein they may be resampled).
In regards to claims 20 and 21, the claims are rejected under the same basis as claim 12 by Thomas in view of Zed and Ziy, wherein the computer-readable medium and processor are taught by Thomas as seen in ¶0039 and 0049.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Thomas et al. (“Thomas”) (WO 2021/175434) in view of Zedayko et al. (“Zed”) (U.S. PG Publication No. 2021/0035289) and Ziyaee et al. (“Ziy”) (U.S. Patent No. 10,860,034), in further view of Park et al. (“Park”) (U.S. PG Publication No. 2021/0406560).
In regards to claim 18, Thomas fails to teach the method according to claim 12, wherein the transforming of step c) includes a feature fusion.
In a similar endeavor Park teaches wherein the transforming of step c) includes a feature fusion (See FIG. 1 and 5 in view of ¶0006-0007).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Park into Thomas because it allows for the generation of a fused output that represents data from the fields of view or sensory fields of each of the sensor types, as described in ¶0006.
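As a purely illustrative aside, a feature fusion of the kind Park describes may be sketched as concatenation of aligned feature maps followed by a 1x1 convolution; the structure below is an illustrative assumption, not Park's disclosed architecture.

    import torch
    import torch.nn as nn

    class BEVFeatureFusion(nn.Module):
        # Fuse two bird's-eye-view feature maps (e.g., from two sensor
        # types) that are assumed to be aligned on the same BEV grid.
        def __init__(self, channels_a: int, channels_b: int, out_channels: int):
            super().__init__()
            self.mix = nn.Conv2d(channels_a + channels_b, out_channels,
                                 kernel_size=1)

        def forward(self, feat_a: torch.Tensor,
                    feat_b: torch.Tensor) -> torch.Tensor:
            return self.mix(torch.cat([feat_a, feat_b], dim=1))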
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Thomas et al. (“Thomas”) (WO 2021/175434) in view of Zedayko et al. (“Zed”) (U.S. PG Publication No. 2021/0035289) and Ziyaee et al. (“Ziy”) (U.S. Patent No. 10,860,034), in further view of Nikitidis et al. (“Niki”) (U.S. PG Publication No. 2021/0082136).
In regards to claim 22, Thomas fails to teach the system according to claim 21, further comprising a module for feature refinement.
In a similar endeavor Niki teaches further comprising a module for feature refinement (See ¶0099).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Niki into Thomas because it allows for further consideration of features for depth-information extraction and fine-tuning of the neural network, as described in at least ¶0099.
Claims 23-25 are rejected under 35 U.S.C. 103 as being unpatentable over Thomas et al. (“Thomas”) (WO 2021/175434) in view of Zedayko et al. (“Zed”) (U.S. PG Publication No. 2021/0035289) and Ziyaee et al. (“Ziy”) (U.S. Patent No. 10,860,034), in further view of Lin et al. (“Lin”) (U.S. PG Publication No. 2023/0144678).
In regards to claim 23, Thomas fails to explicitly teach the system according to claim 21, further comprising: outputting a representation of a semantic segmentation map and a representation of an elevation map with estimated object elevations, wherein each of the maps is in a bird’s eye view.
That is, although Thomas teaches elevation representations on the bird's eye view as seen in ¶0025, 0055-0056 and 0069, an additional prior art reference is provided which shows the use of a semantic segmentation map along with an elevation map.
In a similar endeavor Lin teaches further comprising:
outputting a representation of a semantic segmentation map (See ¶0007-0010) and a representation of an elevation map with estimated object elevations (See ¶0008-0010), wherein each of the maps is in a bird’s eye view (See ¶0007-0010).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Lin into Thomas because it allows for visual and accurate projection of 3D points onto the bird's-eye plane, as described in at least ¶0009.
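As a purely illustrative aside, the claimed pair of bird's-eye-view outputs can be sketched as two lightweight heads over a shared BEV feature map; the head design below is an illustrative assumption and is not Lin's disclosed network.

    import torch
    import torch.nn as nn

    class BEVOutputHeads(nn.Module):
        # Per BEV cell, emit class logits (semantic segmentation map) and
        # a single estimated height (elevation map).
        def __init__(self, in_channels: int, num_classes: int):
            super().__init__()
            self.segmentation = nn.Conv2d(in_channels, num_classes, kernel_size=1)
            self.elevation = nn.Conv2d(in_channels, 1, kernel_size=1)

        def forward(self, bev_features: torch.Tensor):
            return self.segmentation(bev_features), self.elevation(bev_features)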
In regards to claims 24 and 25, the claims are rejected under the same basis as claim 23 by Thomas in view of Zed and Ziy, in further view of Lin.
Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Thomas et al. (“Thomas”) (WO 2021/175434) in view of Zedayko et al. (“Zed”) (U.S. PG Publication No. 2021/0035289) and Ziyaee et al. (“Ziy”) (U.S. Patent No. 10,860,034), in further view of Wu et al. (“Wu”) (U.S. PG Publication No. 2010/0259371).
In regards to claim 26, Thomas fails to teach the method according to claim 12, wherein the transforming of step c) includes resampling the at least one bird's eye view feature using an intrinsic matrix of the camera (interpreted as such per the claim objection above).
In a similar endeavor, Wu teaches wherein the transforming of step c) includes resampling the at least one bird's eye view feature using an intrinsic matrix of the camera (See ¶0024-0026 with reference to Equation 1 in view of FIG. 3 and 4, wherein a conversion/transformation between the image representation and a bird's-eye view representation includes consideration of the camera's intrinsic matrix).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Wu into Thomas because the conversion provides a better visual effect for judging distance and for assistance, as described in at least ¶0028-0030, by at least taking into consideration the intrinsic parameters of the camera.
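As a purely illustrative aside, one conventional way to resample image-space features onto a bird's-eye-view grid using the camera's intrinsic matrix is to project each ground-plane cell into the image and sample features at the resulting pixel locations. The NumPy sketch below assumes a flat ground plane and a pinhole model; it does not reproduce Wu's Equation 1.

    import numpy as np

    def bev_sampling_grid(K: np.ndarray, R: np.ndarray, t: np.ndarray,
                          x_range: tuple, z_range: tuple,
                          resolution: float) -> np.ndarray:
        # For each ground-plane cell (X, 0, Z), compute the image pixel it
        # projects to under intrinsics K and extrinsics (R, t); features
        # can then be resampled from the image at the returned (u, v)
        # locations to populate the bird's-eye-view grid.
        xs = np.arange(*x_range, resolution)
        zs = np.arange(*z_range, resolution)
        X, Z = np.meshgrid(xs, zs)
        ground = np.stack([X, np.zeros_like(X), Z], axis=-1)  # world, y = 0
        cam = ground @ R.T + t                                # world -> camera
        pix = cam @ K.T                                       # camera -> homogeneous pixels
        return pix[..., :2] / pix[..., 2:3]                   # perspective divide -> (u, v)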
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDEMIO NAVAS JR whose telephone number is (571) 270-1067. The examiner can normally be reached M-F, 9 AM - 6 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph Ustaris, can be reached at 571-272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDEMIO NAVAS JR/Primary Examiner, Art Unit 2483