Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 3 is objected to because of the following informalities: the phrase "synthetic image" is duplicated in line 5, which appears to be a typographical error. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 6 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 6 recites the limitation "said one-dimensional filter" in lines 4-5. There is insufficient antecedent basis for this limitation in the claim. It appears that claim 6 may have been intended to depend on claim 5.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 10, and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Jean (US 2009/0175128 A1; related search report) and Hansen (2011, IEEE Trans. on Geoscience).
Regarding claim 1, Jean teaches a computer-implemented method for detecting objects subject to penumbra effects, the method comprising:
receiving a series of distance measurements generated, from a plurality of respectively different positions, by a sonar detection system comprising a synthetic antenna, the detection system operating by [[title] sonar imaging system with a synthetic aperture; [0007] recurrence corresponds to a time interval starting with emission of an acoustical signal during a short emission-duration, followed with reception of the corresponding echo … antenna relative position changing in relation to the object to be detected i.e. the echo source]:
emitting a wave [[abstract] sonar provided with emission elements];
receiving waves reflected by the environment [[abstract] sonar provided with … reception elements; [0059] synthesis of the KxN signals coming from the different transducers allows forming an image of the type shadows-and-echoes of the seafloor.];
determining distances by computing differences between the time of emission of the wave and the times of reception of the reflected waves [[0007] recurrence corresponds to a time interval starting with emission of an acoustical signal during a short emission-duration, followed with reception of the corresponding echo … antenna relative position changing in relation to the object to be detected i.e. the echo source; [0023] range of 300 m to be obtained; [0060] source M being observed under different angles. This mainly enables the vertical position of source M to be determined, which corresponds to a bathymetric or topographic information about the seafloor; [prior art claim 11] [prior art claim 13]];
Jean does not explicitly teach and yet Hansen teaches generating, based on said series of distance measurements, a synthetic image using said synthetic antenna [[abstract] synthetic aperture sonar (SAS) is emerging as an imaging technology that can provide centimeter resolution over hundreds of metres range on the seafloor. Although the principle of SAS has been known for more than 30 years, SAS systems have only recently become commercially available.], the synthetic image representing the distances of the environment from a reference position [[fig. 8] shows distances r, r0, r1 for synthetic aperture imaging];
for each focusing distance of a plurality of focusing distances [[pg. 3, col. 1] R is the range, is the wavelength at center frequency, d is the along-track element size in the array and L is the array length.]:
generating, based on said series of distance measurements or said synthetic image, a synthetic image focused at said focusing distance by applying penumbra effect compensation [[pg. 2] possible enhancements are target enhancement using autofocus [18, chapter 4], [13]; shadow enhancement using fixed focusing [19], [20]; multi-aspect imagery [16]; and SAS interferometry in high resolution [15].].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to use the synthetic aperture sonar taught by Jean for distance ranging as taught by Hansen, so that focusing may be used to enhance imaging of a target (Hansen) [[pg. 2]].
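As an illustration only (no such code appears in the record or in the cited references), the two-way time-of-flight ranging mapped above against Jean [0007] and [0023] can be sketched in Python; the function name and the nominal sound-speed value are assumptions of this sketch:

```python
# Hypothetical sketch of time-of-flight ranging:
# distance = sound_speed * (t_receive - t_emit) / 2 (two-way travel).
# Names and the sound-speed value are illustrative, not from the record.

SOUND_SPEED_M_S = 1500.0  # nominal speed of sound in seawater (assumption)

def echo_distance(t_emit: float, t_receive: float,
                  sound_speed: float = SOUND_SPEED_M_S) -> float:
    """Range to the echo source from a two-way travel time."""
    return sound_speed * (t_receive - t_emit) / 2.0

# An echo received 0.4 s after emission corresponds to a 300 m range,
# consistent with the 300 m range mentioned in Jean [0023].
print(echo_distance(0.0, 0.4))  # 300.0
```

With a nominal 1500 m/s sound speed, a 0.4 s two-way travel time corresponds to the 300 m range mentioned in Jean [0023].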
Regarding claim 10, Jean teaches the method of claim 1, comprising defining a region of interest of the synthetic image, wherein the steps of: generating, based on said series of distance measurements [[0005] sonar device], a synthetic image focused at said focusing distance by applying penumbra effect compensation [[abstract] synthetic antenna sonar system]; and detecting the presence of an object in said focused synthetic image are carried out only in the region of interest [[0007] object to be detected, i.e. the echo source; [0026] a step of K insonifications of an area to be imaged].
Regarding claim 13, Jean teaches a computer program product comprising computer code instructions that, when the program is executed on a computer, cause said computer to execute the method as claimed in claim 1 [[0044] portion of the sonar system comprises a calculator. this calculator can be a PC-type computer comprising a calculating unit or processor; [0063] first micronavigation algorithm 310 implemented in the form of a software the instructions thereof are stored into storage means 216 of computer 206 is then executed by processor 215].
Regarding claim 14, Jean teaches a data processing system comprising a processor configured to implement the method as claimed in claim 1 [[0044][0063]].
Regarding claim 15, Jean teaches a computer-readable recording medium comprising instructions that, when they are executed by a computer, cause said computer to implement the method as claimed in claim 1 [[0044][0063]].
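Purely for illustration, the "for each focusing distance" loop of claim 1 mapped above can be sketched as follows. The 1-D smoothing here is a placeholder for the claimed penumbra-effect compensation, which the cited passages do not specify; all names and parameter values are hypothetical:

```python
import numpy as np

def focus_image(synthetic_image: np.ndarray, focus_distance: float,
                range_resolution: float = 0.1) -> np.ndarray:
    """Placeholder 'focused' image: 1-D smoothing along the range axis,
    with a kernel width that grows with the focusing distance (assumption)."""
    width = max(1, int(round(focus_distance * range_resolution)))
    kernel = np.ones(width) / width
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, synthetic_image)

def focus_sweep(synthetic_image, focus_distances):
    """One focused image per focusing distance, as in the claim-1 loop."""
    return {d: focus_image(synthetic_image, d) for d in focus_distances}

img = np.random.rand(8, 64)          # toy synthetic image (rows x range bins)
stack = focus_sweep(img, [10.0, 20.0, 40.0])
print(sorted(stack))                 # [10.0, 20.0, 40.0]
```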
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Jean (US 2009/0175128 A1) and Hansen (2011, IEEE Trans. on Geoscience) as applied to claim 1 above, and further in view of Vera (2009, EURASIP).
Regarding claim 2, Jean does not explicitly teach and yet Vera teaches the method as claimed in claim 1, wherein detecting the presence of an object in said focused synthetic image comprises applying a supervised machine learning engine trained with a learning base comprising focused images of shadows of objects of the same type as said object [[abstract] new supervised classification approach for automated target recognition (ATR) in SAS images … number of geometrical features are then extracted and used to classify observed objects against a previously compiled database of target and non-target features; [pg. 2, col. 1] recognition procedure starts with a novel detection/segmentation stage based on the Hilbert transform[8], which partitions the image into highlights and shadow areas in order to estimate the most likely position of the target. A number of geometrical features are then extracted around the estimated target position, and are then used to classify the object against a previously compiled database of target and nontarget features].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the synthetic aperture sonar as taught by Jean with the supervised machine learning that considers shadows as taught by Vera, so that shadow regions in the image are identified and regions that are shadow in only part of the synthetic aperture are accounted for (Vera) [[pg. 2, col. 2]].
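The classify-against-a-feature-database idea cited to Vera above can be sketched, for illustration only, as a toy 1-nearest-neighbour classifier; the feature definitions, thresholds, and database values are assumptions of the sketch, not taken from Vera:

```python
import numpy as np

def extract_features(region: np.ndarray) -> np.ndarray:
    """Toy features: shadow fraction, highlight fraction, intensity spread.
    The 0.2 / 0.8 thresholds are illustrative assumptions."""
    shadow = region < 0.2
    highlight = region > 0.8
    return np.array([shadow.mean(), highlight.mean(), region.std()])

def classify(region, db_features, db_labels):
    """1-nearest-neighbour match against a compiled feature database."""
    f = extract_features(region)
    distances = np.linalg.norm(db_features - f, axis=1)
    return db_labels[int(np.argmin(distances))]

db = np.array([[0.4, 0.1, 0.3],    # hypothetical "target" feature vector
               [0.0, 0.0, 0.05]])  # hypothetical "non-target" feature vector
labels = ["target", "non-target"]
flat = np.full((16, 16), 0.5)      # featureless background patch
print(classify(flat, db, labels))  # non-target
```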
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Jean (US 2009/0175128 A1) and Hansen (2011, IEEE Trans. on Geoscience) as applied to claim 1 above, and further in view of Lopera (2012, IEEE).
Regarding claim 3, Jean does not explicitly teach and yet Lopera teaches the method as claimed in claim 1 comprising: prior to the detection: computing, for each pixel of the focused synthetic image, a ratio between the intensities of the pixel in the synthetic image and the synthetic image [[sec. shadow based features][sec. low backscatter features] describes various shadow based features calculated by ratios]; thresholding the pixels of the synthetic image for which this ratio is greater than a threshold [[sec. iii segmentation] we propose a method that makes use of fuzzy sets in order to create a soft threshold to be used in the segmentation of the image into shadow and echo pixels, which takes into account the pixel’s neighbourhood as well as intensity level. Fuzzy sets are used in combination with morphological filtering to better define and extract the pixels belonging to one of the two classes considered here.]; applying a mathematical morphology operation to the thresholded pixels [[sec. iii segmentation] second step is based on morphological filtering. It is defined by two actions, opening and closing, which are based on two operators, erosion and dilatation]; applying said detection to the output of said mathematical morphology operation [[sec. iv classification features] we consider here three regions for the feature extraction. Two of them have been already calculated following the fuzzy-morpho technique described in the previous chapter, i.e., shadow and echo. The third region we use is the area in between the two previous classes, which corresponds to a low-backscatter area, i.e., the area in the sonar image between the shadow and the echo classes].
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the synthetic aperture sonar as taught by Jean, with the thresholding and morphological filtering as taught by Lopera because the computational time to distinguish between echo and shadow pixels is improved by a factor of 5 (Lopera) [[sec. iii. segmentation]].
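A minimal, assumption-laden sketch of the ratio-threshold plus morphological-opening pipeline discussed above follows (Lopera uses fuzzy sets and a soft threshold; the hard threshold and 3x3 structuring element here are simplifications for illustration only):

```python
import numpy as np

def dilate(mask: np.ndarray) -> np.ndarray:
    """Binary dilation with a 3x3 structuring element (wraps at edges)."""
    out = mask.copy()
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dr, 0), dc, 1)
    return out

def erode(mask: np.ndarray) -> np.ndarray:
    """Binary erosion as the dual of dilation."""
    return ~dilate(~mask)

def segment(focused: np.ndarray, synthetic: np.ndarray,
            threshold: float = 1.5) -> np.ndarray:
    ratio = focused / np.maximum(synthetic, 1e-9)  # per-pixel intensity ratio
    mask = ratio > threshold                       # thresholding step
    return dilate(erode(mask))                     # opening: erosion, dilation

syn = np.ones((10, 10))
foc = np.ones((10, 10))
foc[4:7, 4:7] = 2.0   # a 3x3 bright patch survives the opening
foc[0, 0] = 2.0       # an isolated pixel is removed by it
seg = segment(foc, syn)
print(seg[5, 5], seg[0, 0])  # True False
```

The opening removes isolated above-threshold pixels while preserving connected regions, matching the role morphological filtering plays in Lopera's segmentation.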
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Jean (US 2009/0175128 A1) and Hansen (2011, IEEE Trans. on Geoscience) as applied to claim 1 above, and further in view of Gilmour (US 3,950,723 A).
Regarding claim 4, Jean does not explicitly teach and yet Gilmour teaches the method as claimed in claim 1, wherein the plurality of focusing distances comprises a plurality of initial focusing distances defined by a first distance pitch over a first range of focusing distances [[prior art claim 1] focusing said receiver beams at a predetermined initial range and continuously electronically varying said focus after each said transmission], the method comprising: a step of defining a plurality of refined focusing distances, which are defined by: a second range of focusing distances, narrower than the first, around a first focusing distance of said plurality of initial focusing distances, at which the presence of an object has been detected [[fig. 5a and 5b] depicts different focusing points having wider or narrower beams at different distances P1, P2, P3]; a second distance pitch, smaller than the first; for each focusing distance from among said plurality of refined focusing distances, said generating, based on said series of distance measurements [[abstract] high resolution side-looking sonar system wherein the focus is electronically varied with range such that any and all returns are in focus]
Hansen teaches a synthetic image focused at said focusing distance by applying penumbra effect compensation [[pg. 2] possible enhancements are target enhancement using autofocus [18, chapter 4], [13]; shadow enhancement using fixed focusing [19], [20]; multi-aspect imagery [16]; and SAS interferometry in high resolution [15].].
It would have been obvious to combine the adjusting of range as taught by Gilmour with the target enhancement and shadow enhancement using focusing as taught by Hansen, so that high resolution may be maintained at different distance ranges (Gilmour) [[abstract]].
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Jean (US 2009/0175128 A1) and Hansen (2011, IEEE Trans. on Geoscience) as applied to claim 1 above, and further in view of Schweizer (US 5,214,744 A).
Regarding claim 5, Jean does not explicitly teach and yet Schweizer teaches the method of claim 1, wherein the step of generating, based on said synthetic image, a synthetic image focused at said focusing distance by applying penumbra effect compensation is carried out by applying a one-dimensional filter to the synthetic image [[col. 3:25-30] FIG. 5 shows a functional flow diagram for the highlight-shadow matched filter algorithm which is one of the three processes used in the target detection process; [col. 2:15-30] detection process designated as the shadow highlight detector scans a filter over the sonar image to match highlight, shadow, and background representations in the filter with those in the underlying sonar image; [col. 6:25-45] next at block 44 we convolve a series of two dimensional filters with a log transformed image … these filters are separable and each can be implemented with one convolution in the row direction and one convolution in the column direction.; [col. 8:40-55] highlight-shadow matched filter 97 are merged together into a final detection window 98.].
It would have been obvious to combine the target enhancement and shadow enhancement using focusing as taught by Hansen with the matched filtering as taught by Schweizer, so that highlights and shadows are detected (Schweizer) [[col. 2:15-30]].
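Schweizer notes (col. 6:25-45) that the two-dimensional filters are separable, so each can be applied as one convolution in the row direction and one in the column direction. For illustration only, this separability can be sketched as below; the simple averaging kernel is a stand-in for the highlight-shadow matched filter, which is not reproduced in this action:

```python
import numpy as np

def conv1d_rows(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Convolve the 1-D kernel k along each row of img."""
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)

def separable_filter(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Apply a separable 2-D filter as one row pass and one column pass."""
    return conv1d_rows(conv1d_rows(img, k).T, k).T

img = np.random.rand(16, 16)
k = np.ones(3) / 3.0        # illustrative averaging kernel (assumption)
full = separable_filter(img, k)
print(full.shape)           # (16, 16)
```

For a separable kernel, the two 1-D passes cost O(n) per pixel instead of O(n^2) for the full 2-D convolution, which is the point of Schweizer's remark.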
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Jean (US 2009/0175128 A1) and Hansen (2011, IEEE Trans. on Geoscience) as applied to claim 1 above, and further in view of Urbano (US 2007/0161904 A1).
Regarding claim 7, Jean does not explicitly teach and yet Urbano teaches the method of claim 1, comprising, in the event of detection of the presence of an object in said focused synthetic image, generating a composite image based on said synthetic image and said focused synthetic image [[0002] sonar; [0013] synthetic transmit focus ultrasound system; [0014] signals transmitted from each of transducer elements and received back on respective transducer elements; [0083] another method for interrogating a medium and processing the data needed to create an ultrasound image involves synthetic transmit focusing. With synthetic transmit focusing methods, each pixel of an image may be formed from data acquired by multiple transmit events from various locations of the transducers. Generally, with synthetic transmit focusing, sequentially acquired data sets may be combined to form a resultant image.].
It would have been obvious to combine the target enhancement and shadow enhancement using focusing as taught by Hansen with the combining of images to form a resultant image as taught by Urbano, so that each pixel of an image may be formed from data acquired by multiple transmit events (Urbano) [[0083]].
Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Jean (US 2009/0175128 A1) and Hansen (2011, IEEE Trans. on Geoscience) as applied to claim 1 above, and further in view of Bukhari (US 2021/0349468 A1).
Regarding claim 11, Jean does not explicitly teach and yet Bukhari teaches the method as claimed in claim 10, wherein defining the region of interest of the synthetic image comprises: displaying the synthetic image on a graphical interface; a user drawing a rectangle defining the region of interest [[abstract] using a sensor on the autonomous vehicle to capture image data in a region of interest containing the element, where the image data represents components of the element; filtering the image data to produce filtered data having less of an amount of data than the image data; [0038] one or more sonar sensors; [0045] implement control functions and robot movement absent either local or remote input from a user. In some implementations, the control system may be configured to implement control functions, including localization, based at least in part on input from a user; [0053] regions of interest may be expected to contain the element with a degree of certainty. Points outside the region of interested may be discarded or not considered. For example, in FIG. 8, points within region 102 enclosed by a rectangle may be discarded because they are well outside the region of interest].
It would have been obvious to combine the target enhancement and shadow enhancement using focusing as taught by Hansen with a user's selection of a region of interest, so that points captured by the sonar sensors outside the rectangle may be discarded or not considered (Bukhari) [[0053]].
Regarding claim 12, Jean does not explicitly teach and yet Bukhari teaches the method as claimed in claim 10, comprising, in the event of detection of the presence of an object in said focused synthetic image, generating a composite image based on said synthetic image and said focused synthetic image comprising displaying the focused or composite image inside said rectangle [[abstract][0038][0045][0053]].
It would have been obvious to combine the target enhancement and shadow enhancement using focusing as taught by Hansen with a user's selection of a region of interest, so that points captured by the sonar sensors outside the rectangle may be discarded or not considered (Bukhari) [[0053]].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Motoyama (US 2021/0019536 A1).
Regarding claim 6, Motoyama teaches that edge detection is easily disturbed by shadows and noise [[0007]] for sonar [[0088]], and that labeling information based on distance-image estimates can be used to identify the boundary [[0009][0051]]. However, the closest prior art of record does not appear to teach the method as claimed in claim 1, comprising: generating a modified focused synthetic image by adding a shadow associated with a label to said synthetic image focused at said focusing distance; generating a modified synthetic image by applying an inverse filter of said one-dimensional filter to the modified focused synthetic image; and enriching a learning base for detecting the presence of an object with the modified focused synthetic image, said focusing distance and said label.
Allowable Subject Matter
Claims 8-9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 8, Buskenes teaches highlight and shadow regions in sonar [[pg. 1488, col. 2][title]], where "a weight factor also enables us to emphasize either highlight or shadow more, if information (e.g., grazing angle or SNR estimates) suggests that one is more reliable than the other" [[pg. 1494, col. 2]]. However, the closest prior art of record does not appear to teach the method as claimed in claim 7, wherein said generating the composite image comprises: detecting shadows in the focused synthetic image; and assigning the following for each pixel of the composite image: the intensity value of the corresponding pixel of the focused synthetic image for each pixel belonging to a shadow; and the intensity value of the corresponding pixel of the synthetic image otherwise.
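For illustration only, the claim-8 compositing rule described above (focused-image intensity inside detected shadows, synthetic-image intensity elsewhere) can be sketched as follows; the shadow detector, a simple intensity threshold, is an assumption and not taken from the record:

```python
import numpy as np

def composite_hard(synthetic: np.ndarray, focused: np.ndarray,
                   shadow_threshold: float = 0.2) -> np.ndarray:
    """Per pixel: focused value inside shadows, synthetic value elsewhere.
    The intensity-threshold shadow detector is an illustrative assumption."""
    shadow_mask = focused < shadow_threshold
    return np.where(shadow_mask, focused, synthetic)

syn = np.full((4, 4), 0.9)
foc = np.full((4, 4), 0.9)
foc[1, 1] = 0.05                 # one shadow pixel in the focused image
out = composite_hard(syn, foc)
print(out[1, 1], out[0, 0])      # 0.05 0.9
```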
Regarding claim 9, the closest prior art of record does not appear to teach the method as claimed in claim 7, wherein said generating the composite image comprises assigning, for each pixel of the composite image, an intensity value equal to the weighted sum of the intensity value of the corresponding pixel of the focused synthetic image and of the intensity value of the corresponding pixel of the synthetic image, where, for each pixel, the relative weight of the intensity value of the corresponding pixel of the focused synthetic image increases with an index of belonging to a shadow of the pixel.
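The claim-9 weighted-sum rule can likewise be sketched for illustration: per pixel, the focused and synthetic intensities are blended, with the focused image's weight growing with a shadow-membership index in [0, 1]. The membership function used here (inverted, clipped intensity) is an assumption of the sketch:

```python
import numpy as np

def shadow_membership(focused: np.ndarray) -> np.ndarray:
    """Toy shadow-membership index: darker pixel -> higher index (assumption)."""
    return np.clip(1.0 - focused, 0.0, 1.0)

def composite_weighted(synthetic: np.ndarray, focused: np.ndarray) -> np.ndarray:
    """Weighted sum whose focused-image weight increases with membership."""
    w = shadow_membership(focused)
    return w * focused + (1.0 - w) * synthetic

syn = np.array([[0.8]])
foc = np.array([[0.1]])
# membership = 0.9, so the result leans strongly toward the focused value:
print(composite_weighted(syn, foc))  # [[0.17]]
```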
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN D ARMSTRONG whose telephone number is (571)270-7339. The examiner can normally be reached M - F 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Isam Alsomiri can be reached at 571-272-6970. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN D ARMSTRONG/ Examiner, Art Unit 3645