Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 2 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kueny et al. (WO 2019/152572).
With respect to claim 1, Kueny et al. teach a movable body moving in the underground space (Fig. 7, 770);
a camera mounted on the movable body to photograph a first image containing at least one object placed in the underground space (Fig. 7, 771);
an object detection terminal (Fig. 7, 700) configured to receive the first image (para [0046], computer 700 to send and receive data to remote devices) and correct the first image (para [0025], processes the collected pipe inspection data to format it appropriately for AR display. By way of example, a laser scan image of the interior of the pipe may be formed into a cylindrical length of pipe) so as to detect, within a corrected second image, a first object that is a preset object and is comprised in the at least one object, and a second object that is different from the first object, is a preset object, and is comprised in the at least one object (para [0032], The features (surface or subterranean) may be detected and identified automatically, e.g., using computer vision or pattern matching techniques to identify such features; para [0025], e.g., sediment buildup or erosion/cracks in the pipe wall); and
a communication network configured to enable network communication between the movable body and the object detection terminal (Fig. 7, 720; para [0046], computer 700 may communicate data with and between a pipe inspection robot 770).
With respect to claim 2, Kueny et al. teach that each of the first object and the second object comprises at least one of a person, a vehicle, or an underground facility (para [0025], e.g., sediment buildup or erosion/cracks in the pipe wall).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3-6 are rejected under 35 USC 103 as being unpatentable over Kueny et al. (WO 2019/152572) in view of Xue et al. (“Learning to Calibrate Straight Lines for Fisheye Image Rectification”, arXiv:1904.09856v2 [cs.CV] 1 May 2019).
With respect to claim 3, Kueny et al. teach all the limitations of claim 1 as applied above, from which claim 3 depends.
Kueny et al. do not teach expressly that the first image comprises a distorted image.
Xue et al. teach that the first image comprises a distorted image (Introduction, Fisheye image).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to use a fisheye image, which is distorted, in the method of Kueny et al.
The suggestion/motivation for doing so would have been that fisheye images have a large field of view (FOV) but require correction to achieve good performance.
Therefore, it would have been obvious to combine Xue et al. with Kueny et al. to obtain the invention as specified in claim 3.
With respect to claim 4, Xue et al. teach the camera comprises a wide-angle camera (Introduction, Fisheye camera).
With respect to claim 5, Xue et al. teach the wide-angle camera uses at least one of a wide-angle lens or an ultra-wide-angle lens (Introduction, Fisheye camera with fisheye lens).
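For reference, the kind of fisheye rectification contemplated by Xue et al. can be illustrated with a generic equidistant fisheye model. The sketch below is a hypothetical illustration only, not the specific method of the reference; the equidistant model and a known focal length f are assumptions:

```python
import math

def undistort_point(xd, yd, f):
    """Map a point in an equidistant fisheye image to its rectified
    (pinhole) position. Equidistant model: r_d = f * theta; a pinhole
    camera instead gives r_u = f * tan(theta), so remapping by the
    ratio r_u / r_d straightens distorted lines."""
    rd = math.hypot(xd, yd)       # distance from the image center
    if rd == 0:
        return 0.0, 0.0           # the center is not distorted
    theta = rd / f                # incidence angle recovered from the fisheye radius
    ru = f * math.tan(theta)      # radius the pinhole camera would produce
    scale = ru / rd
    return xd * scale, yd * scale
```

Points far from the center are pushed outward, which is why a rectified image covers a narrower field of view than the original fisheye capture.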
With respect to claim 6, Kueny et al. teach that the object detection terminal is configured to detect a plurality of objects comprising the first object and the second object (para [0025], e.g., sediment buildup or erosion/cracks in the pipe wall).
Claims 7, 9, 10, 12-14, 17 and 18 are rejected under 35 USC 103 as being unpatentable over Kueny et al. (WO 2019/152572) in view of Xue et al. (“Learning to Calibrate Straight Lines for Fisheye Image Rectification”, arXiv:1904.09856v2 [cs.CV] 1 May 2019) and in further view of 이준 et al. (KR 102924781).
With respect to claim 7, Kueny et al. teach all the limitations of claim 1 as applied above, from which claim 7 depends.
Kueny et al. also teach a communication unit configured to communicate with the moving object and receive the first image acquired by being captured by the camera (Fig. 7, 720; para [0046], computer 700 may communicate data with and between a pipe inspection robot 770).
Kueny et al. do not teach expressly a convolution filter generation unit configured to generate a convolution filter that matches a distortion shape of the camera;
a main control unit configured to correct the distortion by applying the convolutional filter to the first image;
a feature extraction unit configured to calculate a feature map through a convolution operation so as to infer a region from the second image corrected by the main control unit;
and an object classification module configured to receive the feature map as an input so as to classify the first object and the second object contained in the inferred region.
Xue et al. teach a convolution filter generation unit configured to generate a convolution filter that matches a distortion shape of the camera (Page 3, 2. General Fisheye Camera Model, distortion effect of fisheye images can be rectified once we can get the parameters; 3. Deep Calibration and Rectification Model, exploit the relationship between scene geometry of distorted lines and the corresponding distortion parameters of fisheye images by CNNs); and
a main control unit configured to correct the distortion by applying the convolutional filter to the first image (Fig. 2, rectified image).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to generate a convolution filter and correct the distortion in the method of Kueny et al.
The suggestion/motivation for doing so would have been that specified geometric objects in fisheye images can be accurately detected once the distortion is corrected.
이준 et al. teach a feature extraction unit configured to calculate a feature map through a convolution operation so as to infer a region from the image (Page 5, 2nd para, bounding boxes are inferred directly through a fully connected layer and output as output data; Fig. 6, bounding box + confidence), and an object classification module configured to receive the feature map as an input so as to classify the first object and the second object contained in the inferred region (Fig. 6, final detection).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to calculate a feature map and classify objects in the system of Kueny et al.
The suggestion/motivation for doing so would have been to use a well-known method to classify a plurality of objects included in the image.
Therefore, it would have been obvious to combine 이준 et al. and Xue et al. with Kueny et al. to obtain the invention as specified in claim 7.
With respect to claim 9, 이준 et al. teach the main control unit is configured to infer regions with a high probability, in which at least one of the first object and the second object exists, from the feature map calculated by the feature extraction unit (Page 5, 2nd para, bounding boxes are inferred directly through a fully connected layer and output as output data; Fig. 6, bounding box + confidence).
With respect to claim 10, 이준 et al. teach the main control unit is configured to infer an object region by utilizing a plurality of different anchor boxes so as to detect the first object and the second object (Page 5, 2nd para, bounding boxes are inferred directly through a fully connected layer and output as output data; Fig. 6, bounding box + confidence).
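As a generic illustration of the anchor-box technique recited in claim 10 (not the specific implementation of 이준 et al.), candidate boxes of several scales and aspect ratios can be generated around a single feature-map location; the scales and ratios below are hypothetical example values:

```python
import math

def make_anchors(center, scales, ratios):
    """Generate candidate boxes (x1, y1, x2, y2) of several scales and
    aspect ratios around one feature-map location; a detector then
    scores each anchor and regresses it toward the true object box."""
    cx, cy = center
    boxes = []
    for s in scales:              # base side length in pixels
        for r in ratios:          # width / height aspect ratio
            w = s * math.sqrt(r)  # widen with the ratio...
            h = s / math.sqrt(r)  # ...and shrink height to keep area s*s
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

# Six anchors (2 scales x 3 ratios) centered on the point (64, 64).
anchors = make_anchors(center=(64, 64), scales=[32, 64], ratios=[0.5, 1.0, 2.0])
```

Using several ratios lets one location propose boxes for both tall objects (e.g., a person) and wide objects (e.g., a vehicle).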
With respect to claim 12, Xue et al. teach the convolutional filter generation unit is configured to generate a convolutional filter configured to correct distortion due to a parameter of the camera (Page 3, Fig. 2, rectified image; 3. Deep Calibration and Rectification Model).
With respect to claim 13, Xue et al. teach the parameter of the camera comprises at least one of a focal length of a lens used in the camera, an angle of view of the camera, and a relative aperture of the camera (Page 3, 2. General Fisheye Camera Model).
With respect to claim 14, Xue et al. teach the convolution filter generation unit is configured to generate a convolution filter configured to correct distortion due to a parameter of the camera, which comprises a focal length of a lens, an angle of view of the camera, and a relative aperture of the camera (Page 3, Fig. 2, rectified image; 3. Deep Calibration and Rectification Model, exploit the relationship between scene geometry of distorted lines and the corresponding distortion parameters of fisheye images by CNNs).
With respect to claim 17, claim 17 is rejected for the same reasons as claim 7 above.
With respect to claim 18, Xue et al. teach that the parameter of the camera is acquired by the object detection terminal before generating the convolutional filter (Page 3, 2. General Fisheye Camera Model, distortion effect of fisheye images can be rectified once we can get the parameters; 3. Deep Calibration and Rectification Model, exploit the relationship between scene geometry of distorted lines and the corresponding distortion parameters of fisheye images by CNNs).
Claims 8, 15, 16, 19 and 20 are rejected under 35 USC 103 as being unpatentable over Kueny et al. (WO 2019/152572) in view of Xue et al. (“Learning to Calibrate Straight Lines for Fisheye Image Rectification”, arXiv:1904.09856v2 [cs.CV] 1 May 2019) and 이준 et al. (KR 102924781), and in further view of Simplilearn (“Convolutional Neural Network Tutorial (CNN) | How CNN Works | Deep Learning Tutorial | Simplilearn”, https://www.youtube.com/watch?v=Jy9-aGMB_TE&t=18s, 6/19/2018).
With respect to claim 8, Kueny et al., Xue et al. and 이준 et al. teach all the limitations of claim 7 as applied above, from which claim 8 depends.
The convolutional filter described in claim 8 is an ordinary filter used in a CNN; however, Kueny et al., Xue et al. and 이준 et al. do not teach it expressly.
Simplilearn teach a convolutional filter that is equivalent to that of claim 8 (timeline 9:58-11:56).
[media_image1.png and media_image2.png: greyscale images reproduced from the cited video]
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to use a convolutional filter to calculate a convolved feature in the system of Kueny et al.
The suggestion/motivation for doing so would have been to use a well-known method to extract features in the image.
Therefore, it would have been obvious to combine Simplilearn with 이준 et al., Xue et al. and Kueny et al. to obtain the invention as specified in claim 8.
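The convolution operation shown in the cited video can be sketched generically as follows. This is a minimal illustration, not code from any cited reference; the 5x5 image and vertical-edge kernel are hypothetical examples, and, as in most CNN libraries, the "convolution" is implemented as cross-correlation (the kernel is not flipped):

```python
def convolve2d(image, kernel):
    """Slide the kernel over the image ('valid' mode) and sum the
    elementwise products at each position, producing a feature map
    (the 'convolved feature')."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(image) - kh + 1):
        row = []
        for x in range(len(image[0]) - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[y + i][x + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel responds strongly where the image goes 0 -> 1.
image = [[0, 0, 1, 1, 1]] * 5
kernel = [[-1, 0, 1]] * 3
feature_map = convolve2d(image, kernel)   # 3x3 map; each row is [3.0, 3.0, 0.0]
```

The peak responses mark the column where the intensity edge lies, which is how such filters extract features from an image.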
With respect to claim 15, claim 15 is rejected for the same reasons as claim 8 above.
With respect to claim 16, Simplilearn teach ΔP_n = ΔP_(n+c) − ΔP_c (timeline 9:58-11:56, which is interpreted as the next position of the convolutional filter).
[media_image3.png: greyscale image reproduced from the cited video]
With respect to claim 19, claim 19 is rejected for the same reasons as claim 8 above.
With respect to claim 20, claim 20 is rejected for the same reasons as claim 16 above.
Claim 11 is rejected under 35 USC 103 as being unpatentable over Kueny et al. (WO 2019/152572) in view of Xue et al. (“Learning to Calibrate Straight Lines for Fisheye Image Rectification”, arXiv:1904.09856v2 [cs.CV] 1 May 2019) and 이준 et al. (KR 102924781), and in further view of Yoo et al. (US 2020/0294257).
With respect to claim 11, Kueny et al., Xue et al. and 이준 et al. teach all the limitations of claim 10 as applied above, from which claim 11 depends.
Kueny et al., Xue et al. and 이준 et al. do not teach expressly an RPN (region proposal network) algorithm.
Yoo et al. teach an RPN algorithm (paras [0034] and [0066]).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to use an RPN algorithm in the system of Kueny et al., Xue et al. and 이준 et al.
The suggestion/motivation for doing so would have been to use a well-known method to efficiently extract features in the image.
Therefore, it would have been obvious to combine Yoo et al. with 이준 et al., Xue et al. and Kueny et al. to obtain the invention as specified in claim 11.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Randolph Chu whose telephone number is 571-270-1145. The examiner can normally be reached on Monday to Thursday from 7:30 am - 5 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached on (571) 272-7778.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/RANDOLPH I CHU/
Primary Examiner, Art Unit 2667