Prosecution Insights
Last updated: April 19, 2026
Application No. 17/656,496

CLASSIFIER TO CLASSIFY ANOMALOUS OBJECTS EXTRACTED FROM A SENSOR DATA FIELD

Final Rejection (§102, §103)
Filed
Mar 25, 2022
Examiner
ANSARI, TAHMINA N
Art Unit
2674
Tech Center
2600 — Communications
Assignee
Lawrence Livermore National Security, LLC
OA Round
4 (Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 86%, above average (743 granted / 868 resolved; +23.6% vs TC avg)
Interview Lift: +17.9% allow-rate lift across resolved cases with an interview
Typical Timeline: 2y 8m average prosecution; 33 applications currently pending
Career History: 901 total applications across all art units

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 22.6% (-17.4% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)

Compared against Tech Center average estimates • Based on career data from 868 resolved cases
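For readers who want to sanity-check roll-ups like allow rate and interview lift against raw outcome data, here is a minimal sketch. The record shape (`allowed`, `interviewed`) and the synthetic counts are hypothetical, chosen only to make the arithmetic visible; they are not this examiner's actual data.

```python
from dataclasses import dataclass

# Hypothetical record shape; the report's underlying data fields may differ.
@dataclass
class Case:
    allowed: bool       # resolved as a grant
    interviewed: bool   # at least one examiner interview of record

def allow_rate(cases):
    """Share of resolved cases that were granted (cf. 743/868 ≈ 86% above)."""
    return sum(c.allowed for c in cases) / len(cases)

def interview_lift(cases):
    """Allow-rate gap between interviewed and non-interviewed resolved cases."""
    with_iv = [c for c in cases if c.interviewed]
    without = [c for c in cases if not c.interviewed]
    return allow_rate(with_iv) - allow_rate(without)

# Synthetic data: 90 interviewed cases (81 granted), 90 without (63 granted).
cases = ([Case(True, True)] * 81 + [Case(False, True)] * 9
         + [Case(True, False)] * 63 + [Case(False, False)] * 27)
print(round(allow_rate(cases), 3))      # 0.8
print(round(interview_lift(cases), 3))  # 0.2
```

A +17.9% lift in this framing simply means interviewed cases resolved to allowance 17.9 points more often than non-interviewed ones; it is correlation across the examiner's docket, not a guarantee for any one case.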

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is in response to the applicant's reply filed December 4, 2025. In the applicant's reply, claims 11 and 14 were amended, and claims 1-5 and 18-29 were previously WITHDRAWN. No claims were cancelled. Applicant is advised that in order to advance prosecution, the withdrawn claims must be CANCELLED before a Notice of Allowance can be mailed. Claims 11-17 are pending in this application.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Examiner's Responses to Applicant's Remarks

Applicant's amendments filed on December 4, 2025 have been fully considered. The amendments overcome the following rejections set forth in the office action mailed September 3, 2025. Applicant's amendments overcome the rejections of claims 11-17 under 35 U.S.C. 112, second paragraph, for indefiniteness, and the rejection is hereby withdrawn. Applicant's amendments overcome the rejections of claims 14-17 under 35 U.S.C. 102(a)(1) as being anticipated by Hong et al. (US PGPub 2009/0103797 A1, hereinafter "Hong"), and the rejection is hereby withdrawn. Applicant's amendments overcome the rejections of claims 11-17 under 35 U.S.C. 103 as being unpatentable over Hong in view of Carlson (US PGPub 2004/0041084 A1, hereinafter "Carlson"), and the rejection is hereby withdrawn.
Applicant's arguments with respect to claims 11-17 have been considered but are moot in view of the new ground(s) of rejection, presented below, necessitated by applicant's amendments.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 11 and 14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhang et al. (US PGPub 2019/0234746 A1, hereinafter "Zhang").

Consider Claim 11. Zhang teaches:

11. (Original) A method performed by one or more computing systems to detect anomalous objects in a sensor data field of elements, each element having a position within the sensor data field, the method comprising: (Zhang: abstract, The present disclosure discloses a method for simultaneous localization and mapping, which can reliably handle strong rotation and fast motion. The method provided a simultaneous localization and mapping algorithm framework based on a key frame, which can support rapid local map extension. Under this framework, a new feature tracking method based on multiple homography matrices is provided, and this method is efficient and robust under strong rotation and fast motion.
A camera orientation optimization framework based on a sliding window is further provided to increase motion constraint between successive frames with simulated or actual IMU data. Finally, a method for obtaining a real scale of a specific plane and scene is provided in such a manner that a virtual object is placed on a specific plane in real size. [0014]-[0041]) 11. generating a background-suppressed sensor data field with background-suppressed elements by suppressing elements that represent background using a background suppression level that is established by training classifiers based on a different background suppression level for each classifier and selecting the background suppression level based on effectiveness of the classifiers; (Zhang: [0014]-[0023], [0014] A method for simultaneous localization and mapping, includes steps of: 1) a foreground thread processing a video stream in real time, and extracting a feature point for any current frame Ii; 2) tracking a set of global homography matrices Hi G; 3) using a global homography matrix and a specific plane homography matrix to track a three-dimensional point so as to obtain a set of 3D-2D correspondences required by a camera attitude estimation; and 4) evaluating quality of the tracking according to a number of the tracked feature points and classifying the quality into good, medium and poor; 4.1) when the quality of the tracking is good, performing extension and optimization of a local map, and then determining whether to select a new key frame; 4.2) when the quality of the tracking is medium, estimating a set of local homography matrices Hk→i L, and re-matching a feature that has failed to be tracked; and 4.3) when the quality of the tracking is poor, triggering a relocating program, and once the relocating is successful, performing a global homography matrix tracking using a key frame that is relocated, and then performing tracking of features again, wherein in step 4.1), when it is determined to select 
a new key frame, the selected new key frame wakes up a background thread for global optimization, the new key frame and a new triangulated three-dimensional point is added for extending a global map, and a local bundle adjustment is adopted for optimization; then an existing three-dimensional plane is extended, the added new three-dimensional point is given to the existing plane, or a new three-dimensional plane is extracted from the added new three-dimensional point; subsequently, a loop is detected by matching the new key frame with an existing key frame; and finally, a global bundle adjustment is performed on the global map.)

11. for each of a plurality of windows within the background-suppressed sensor data field that are centered on a different background-suppressed element, determining whether the window includes a peak element at a peak location that satisfies a peak criterion; (Zhang: [0030]-[0031], In an embodiment, said performing extension and optimization of the local map in step 4.1) includes: for one two-dimensional feature xk of a frame Fk, calculating a ray angle according to a formula of [formula reproduced as an image in the original])

11.
and for each peak element, growing an anomalous object from the peak location of that peak element to include elements whose positions are adjacent to each other in the field and that satisfy an object criterion; (Zhang: [0032]-[0036], [0032] In an embodiment, said fixing the three-dimensional point position and optimizing all camera orientations in the local window includes: assuming that there is already a linear acceleration â and a rotation speed {circumflex over (ω)} measured in a local coordinate system, and a real linear acceleration is a=â−ba+na, a real rotation speed is ω={circumflex over (ω)}−bω+nω, na˜N(0,σn a 2I), nω˜N(0,σn ω 2I) are Gaussian noise of inertial measurement data, I is a 3×3 identity matrix, ba and bω are respectively offsets of the linear acceleration and the rotation speed with time, extending a state of a camera motion to be: s=[qT pT vT ba T bω T]T, where v is a linear velocity in a global coordinate system, a continuous-time motion equation of all states is: [formula reproduced as an image in the original])

11. extracting a feature vector of features for the grown anomalous object including at least: positions of objects of interest, depth of object, mean of observed sensor values for sensor readings associated with the object, and object symmetry in the sensor data fields; and (Zhang: [0014]-[0023], SLAM algorithm requires the following elements [figures reproduced as images in the original])

11. classifying the feature vector as representing an anomalous object of interest or an anomalous object not of interest, the classifier being the classifier associated with the selected background suppression level.
(Zhang: [0014]-[0023], 4) evaluating quality of the tracking according to a number of the tracked feature points and classifying the quality into good, medium and poor; 4.1) when the quality of the tracking is good, performing extension and optimization of a local map, and then determining whether to select a new key frame; 4.2) when the quality of the tracking is medium, estimating a set of local homography matrices Hk→i L, and re-matching a feature that has failed to be tracked; and 4.3) when the quality of the tracking is poor, triggering a relocating program, and once the relocating is successful, performing a global homography matrix tracking using a key frame that is relocated, and then performing tracking of features again, wherein in step 4.1), when it is determined to select a new key frame, the selected new key frame wakes up a background thread for global optimization, the new key frame and a new triangulated three-dimensional point is added for extending a global map, and a local bundle adjustment is adopted for optimization; then an existing three-dimensional plane is extended, the added new three-dimensional point is given to the existing plane, or a new three-dimensional plane is extracted from the added new three-dimensional point; subsequently, a loop is detected by matching the new key frame with an existing key frame; and finally, a global bundle adjustment is performed on the global map.) Consider Claim 14. Zhang teaches: 14. (Original) A method performed by one or more computing systems for generating a classifier to classify anomalous objects extracted from a sensor data field as of interest or not of interest, the method comprising: (Zhang: abstract, The present disclosure discloses a method for simultaneous localization and mapping, which can reliably handle strong rotation and fast motion. The method provided a simultaneous localization and mapping algorithm framework based on a key frame, which can support rapid local map extension. 
Under this framework, a new feature tracking method based on multiple homography matrices is provided, and this method is efficient and robust under strong rotation and fast motion. A camera orientation optimization framework based on a sliding window is further provided to increase motion constraint between successive frames with simulated or actual IMU data. Finally, a method for obtaining a real scale of a specific plane and scene is provided in such a manner that a virtual object is placed on a specific plane in real size. [0014]-[0041]) 14. for each of a plurality of different background suppression levels, training an object classifier using training data extracted from background-suppressed sensor data fields based on that background suppression level, (Zhang: [0014]-[0023] 0014] A method for simultaneous localization and mapping, includes steps of: 1) a foreground thread processing a video stream in real time, and extracting a feature point for any current frame Ii; 2) tracking a set of global homography matrices Hi G; 3) using a global homography matrix and a specific plane homography matrix to track a three-dimensional point so as to obtain a set of 3D-2D correspondences required by a camera attitude estimation; and 4) evaluating quality of the tracking according to a number of the tracked feature points and classifying the quality into good, medium and poor; 4.1) when the quality of the tracking is good, performing extension and optimization of a local map, and then determining whether to select a new key frame; 4.2) when the quality of the tracking is medium, estimating a set of local homography matrices Hk→i L, and re-matching a feature that has failed to be tracked; and 4.3) when the quality of the tracking is poor, triggering a relocating program, and once the relocating is successful, performing a global homography matrix tracking using a key frame that is relocated, and then performing tracking of features again, wherein in step 4.1), when it is 
determined to select a new key frame, the selected new key frame wakes up a background thread for global optimization, the new key frame and a new triangulated three-dimensional point is added for extending a global map, and a local bundle adjustment is adopted for optimization; then an existing three-dimensional plane is extended, the added new three-dimensional point is given to the existing plane, or a new three-dimensional plane is extracted from the added new three-dimensional point; subsequently, a loop is detected by matching the new key frame with an existing key frame; and finally, a global bundle adjustment is performed on the global map.)

the training data including feature vectors for anomalous objects labeled as of interest or not of interest based on prior knowledge of features including at least: positions of objects of interest, depth of object, mean of observed sensor values for sensor readings associated with the object, and object symmetry in the sensor data fields; (Zhang: [0014]-[0023], SLAM algorithm requires the following elements [figures reproduced as images in the original])

14. and selecting one of the object classifiers associated with a background suppression level based on effectiveness of classification.
(Zhang: [0014]-[0023], 4) evaluating quality of the tracking according to a number of the tracked feature points and classifying the quality into good, medium and poor; 4.1) when the quality of the tracking is good, performing extension and optimization of a local map, and then determining whether to select a new key frame; 4.2) when the quality of the tracking is medium, estimating a set of local homography matrices Hk→i L, and re-matching a feature that has failed to be tracked; and 4.3) when the quality of the tracking is poor, triggering a relocating program, and once the relocating is successful, performing a global homography matrix tracking using a key frame that is relocated, and then performing tracking of features again, wherein in step 4.1), when it is determined to select a new key frame, the selected new key frame wakes up a background thread for global optimization, the new key frame and a new triangulated three-dimensional point is added for extending a global map, and a local bundle adjustment is adopted for optimization; then an existing three-dimensional plane is extended, the added new three-dimensional point is given to the existing plane, or a new three-dimensional plane is extracted from the added new three-dimensional point; subsequently, a loop is detected by matching the new key frame with an existing key frame; and finally, a global bundle adjustment is performed on the global map.) Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

Claims 11-17 are rejected under 35 U.S.C. 103 as being unpatentable over Hong et al. (US PGPub 2009/0103797 A1, hereinafter "Hong"), in view of Zhang et al. (US PGPub 2019/0234746 A1, hereinafter "Zhang").

Consider Claim 11. Hong teaches:

11. (Original) A method performed by one or more computing systems to detect anomalous objects in a sensor data field of elements, each element having a position within the sensor data field, the method comprising: (Hong: abstract, A method and system for nodule feature extract using background contextual information in chest x-ray images is disclosed. In order to detect false positives in nodule candidates for a chest x-ray image, background contextual information, such as contextual vessel tree information, is defined in the chest x-ray image. Features are extracted for each nodule candidate based on the background contextual information, and the extracted features are used to detect whether each nodule candidate is a false positive or a genuine nodule. [0022] FIG. 1 illustrates nodule-like vessel structures in chest x-ray images. As illustrated in FIG. 1, images 102 and 104 are chest x-ray images with nodule-like vessel tree structures. In images 102 and 104, a number of vessel tree clusters 106 and 108 at the regions near the mediastinum appear more like genuine nodules than actual genuine nodules using conventional nodule features.
Accordingly, the conventional nodule features are not capable of identifying such vessel tree clusters as false positives. However, the vessel tree structures nearby clearly provide useful indication that these are not genuine nodules. [0023] FIG. 2 illustrates a method for nodule detection and nodule feature extraction using background contextual information according to an embodiment of the present invention. As illustrated in FIG. 2, at step 202 a chest x-ray image is received. The image can be received directly from an image acquisition device, such as an x-ray imaging device. Alternatively, the image can be received by loading an image, stored on a computer readable medium, or memory or storage of a computer system. [0024] At step 204, nodule candidates are detected in the chest x-ray image. For example, nodule candidates can be detected by a computer aided automatic nodule detection method. For example, any well-known automatic nodule detection method can be utilized to detect nodule candidates in the chest x-ray image. [0041], Figure 8) 11. generating a background-suppressed sensor data field with background-suppressed elements by suppressing elements that represent background using a background suppression level that is established by training classifiers based on a different background suppression level for each classifier and selecting the background suppression level based on effectiveness of the classifiers; (Hong: [0024] The multi-filter based nodule candidate detection method includes of a number of relatively independent processing stages. First, a multiscale filtering stage is performed, in which a number of filtered images are generated using filters that are tuned to nodules in a certain range. Next, a nodule candidate detection stage is performed, in which a local peak detection algorithm using multiple thresholding based shape analysis is applied to each of the filtered images. 
Then, a fusion stage is performed, in which detection results from different filtered images are fused together to produce the final detection result. The final detection result gives points and the estimated size in the chest x-ray image that are nodule candidates. [0025] At step 206, background contextual information is defined in the chest x-ray image. For the nodule feature extraction, background contextual information refers to prominent background structures inside lung regions that complicate the detection of genuine nodules. Ideally, if the background structures are well defined, then a precise segmentation of the background structures may be obtained, which forms a valid representation of the background contextual information. However, contextual background structures in a chest x-ray images may not always be well defined. In this case, the background contextual information may be defined by a pseudo-segmentation of the background structures with a concentration focused on labeling significant intensity and/or structure abnormalities.) 11. for each of a plurality of windows within the background-suppressed sensor data field that are centered on a different background-suppressed element, determining whether the window includes a peak element at a peak location that satisfies a peak criterion; (Hong: [0024] The multi-filter based nodule candidate detection method includes of a number of relatively independent processing stages. First, a multiscale filtering stage is performed, in which a number of filtered images are generated using filters that are tuned to nodules in a certain range. Next, a nodule candidate detection stage is performed, in which a local peak detection algorithm using multiple thresholding based shape analysis is applied to each of the filtered images. Then, a fusion stage is performed, in which detection results from different filtered images are fused together to produce the final detection result. 
The final detection result gives points and the estimated size in the chest x-ray image that are nodule candidates. [0025] At step 206, background contextual information is defined in the chest x-ray image. For the nodule feature extraction, background contextual information refers to prominent background structures inside lung regions that complicate the detection of genuine nodules. 11. and for each peak element, growing an anomalous object from the peak location of that peak element to include elements whose positions are adjacent to each other in the field and that satisfy an object criterion; (Hong: [0024] The multi-filter based nodule candidate detection method includes of a number of relatively independent processing stages. First, a multiscale filtering stage is performed, in which a number of filtered images are generated using filters that are tuned to nodules in a certain range. Next, a nodule candidate detection stage is performed, in which a local peak detection algorithm using multiple thresholding based shape analysis is applied to each of the filtered images. Then, a fusion stage is performed, in which detection results from different filtered images are fused together to produce the final detection result. The final detection result gives points and the estimated size in the chest x-ray image that are nodule candidates. [0025] At step 206, background contextual information is defined in the chest x-ray image. For the nodule feature extraction, background contextual information refers to prominent background structures inside lung regions that complicate the detection of genuine nodules. [0026]- [0027] FIG. 3 illustrates a method of vessel tree propagation for defining background contextual information according to an embodiment of the present invention. The method of FIG. 3 uses a controlled marching process operating on a combination of the raw chest x-ray image and a gradient image of the chest x-ray image. The method of FIG. 
3, assumes that both lung lobes are pre-segmented and labeled. The lungs can be segmented using any well-known lung segmentation algorithm. As illustrated in FIG. 3, at step 302, initial vessel tree templates are generated. The method starts from a pair of initial template shapes that are placed at the low inside boundaries of the segmented lung lobes. The initial vessel tree template shapes are a pair of binary blocks with a predefined shape that are refined and placed at the low inner boundaries of segmented lung masks. [0030]-[0037]) 11. extracting a feature vector of features for the grown anomalous object; (Hong: [0023] Embodiments of the present invention are directed to an extraction technique that isolates prominent background structures in a chest x-ray image before feature extraction, and calculates features under the context of a global view of the isolated contextual background structures. FIG. 2 illustrates a method for nodule detection and nodule feature extraction using background contextual information according to an embodiment of the present invention. As illustrated in FIG. 2, at step 202 a chest x-ray image is received. [0028] Returning to FIG. 3, at step 304, the vessel tree region front is propagated. From the initial vessel tree templates, vessel tree regions are labeled progressively using a front propagation algorithm at multiple confidence intervals. The motivation behind the idea of front propagation is to control the formation of vessel trees and to generate contextual information for the later feature extraction process, instead of a plain pixel classification using intensity and/or texture features. In the front propagation process, only pixels that are within a small neighborhood of the previously propagated vessel tree regions are evaluated for possible propagation. 
[0032] According to an embodiment of the present invention, a set of four features are derived by analyzing the properties of the region enclosed by vessel trees and covered by the extended region of interest of a nodule candidate. In order to extract these features for each nodule candidate, a region of interest is estimated for each candidate. The region of interest for each candidate is a circular region approximately covering the candidate. As described above, the size and location of the candidate is estimated in the candidate generation algorithm. The extended covering region of interest for each candidate is defined as the circular region that is an expansion of the original circular region of interest to twice the size of the original region of interest. Within the defined region of interest, the following four features (i.e., first, second, third, and fourth features, respectively) are calculated: [0033]-[0036] 1. a regular shape feature of an empty region at the highest possible confidence level where the candidate point is in the background region; 2. the average propagation distance of vessel tree pixels next to the empty region at the highest possible confidence level; 3. the relative size of the empty region with respect to the covering region of interest at the possible highest confidence level; and 4. the weighted sum of confidence levels of boundary pixels in vessel tree regions that are next to the empty region.) 11. extracting a feature vector of features for the grown anomalous object including at least: positions of objects of interest, depth of object, mean of observed sensor values for sensor readings associated with the object, object symmetry in the sensor data fields; (Hong: [0032] According to an embodiment of the present invention, a set of four features are derived by analyzing the properties of the region enclosed by vessel trees and covered by the extended region of interest of a nodule candidate. 
In order to extract these features for each nodule candidate, a region of interest is estimated for each candidate. The region of interest for each candidate is a circular region approximately covering the candidate. As described above, the size and location of the candidate is estimated in the candidate generation algorithm. The extended covering region of interest for each candidate is defined as the circular region that is an expansion of the original circular region of interest to twice the size of the original region of interest. Within the defined region of interest, the following four features (i.e., first, second, third, and fourth features, respectively) are calculated: [0033]-[0036] 1. a regular shape feature of an empty region at the highest possible confidence level where the candidate point is in the background region; 2. the average propagation distance of vessel tree pixels next to the empty region at the highest possible confidence level; 3. the relative size of the empty region with respect to the covering region of interest at the possible highest confidence level; and 4. the weighted sum of confidence levels of boundary pixels in vessel tree regions that are next to the empty region.)) 11. and classifying the feature vector as representing an anomalous object of interest or an anomalous object not of interest, the classifier being the classifier associated with the selected background suppression level. (Hong: [0007] [0032] According to an embodiment of the present invention, a set of four features are derived by analyzing the properties of the region enclosed by vessel trees and covered by the extended region of interest of a nodule candidate. In order to extract these features for each nodule candidate, a region of interest is estimated for each candidate. The region of interest for each candidate is a circular region approximately covering the candidate. 
As described above, the size and location of the candidate is estimated in the candidate generation algorithm. The extended covering region of interest for each candidate is defined as the circular region that is an expansion of the original circular region of interest to twice the size of the original region of interest. Within the defined region of interest, the following four features (i.e., first, second, third, and fourth features, respectively) are calculated [0033]-[0036], [0037] Note that even though the first, second, third, and fourth features are computed within a local region of interest, they actually represent the relationships between regions of interest and overall vessel trees, which are of global in nature. Although, four features are described herein, the present invention is not limited thereto. For example, more subtle relationships between the vessel tree regions and nodules can be derived and extracted as features. [0038] At step 210, false positive nodule candidates and genuine nodules are detected based on the extracted features. For example, each of the features extracted for each candidate nodule based on the background contextual information, such as the first, second, third, and fourth features described above, can be compared to a corresponding threshold in order to determine whether each nodule candidate is a false positive or a genuine nodule. This detection of false positives and genuine nodules can confirm the presence of actual nodules, while eliminating false positives erroneously detected using an automatic nodule detection algorithm. These features can also be used as inputs to other classification schemes to differentiate genuine nodules from false positives. For example, these features can be used to train a learning base classifier to differentiate genuine nodules from false positives. [0039] FIG. 
7) Even if Hong does not teach “a window includes a peak element at a peak location that satisfies a peak criterion” or “extracting a feature vector of features for the grown anomalous object including at least: depth of object, mean of observed sensor values for sensor readings associated with the object, object symmetry in the sensor data fields;” Zhang teaches: 11. (Original) A method performed by one or more computing systems to detect anomalous objects in a sensor data field of elements, each element having a position within the sensor data field, the method comprising: (Zhang: abstract, The present disclosure discloses a method for simultaneous localization and mapping, which can reliably handle strong rotation and fast motion. The method provided a simultaneous localization and mapping algorithm framework based on a key frame, which can support rapid local map extension. Under this framework, a new feature tracking method based on multiple homography matrices is provided, and this method is efficient and robust under strong rotation and fast motion. A camera orientation optimization framework based on a sliding window is further provided to increase motion constraint between successive frames with simulated or actual IMU data. Finally, a method for obtaining a real scale of a specific plane and scene is provided in such a manner that a virtual object is placed on a specific plane in real size. [0014]-[0041]) 11. 
generating a background-suppressed sensor data field with background-suppressed elements by suppressing elements that represent background using a background suppression level that is established by training classifiers based on a different background suppression level for each classifier and selecting the background suppression level based on effectiveness of the classifiers; (Zhang: [0014]-[0023], [0014] A method for simultaneous localization and mapping, includes steps of: 1) a foreground thread processing a video stream in real time, and extracting a feature point for any current frame Ii; 2) tracking a set of global homography matrices Hi G; 3) using a global homography matrix and a specific plane homography matrix to track a three-dimensional point so as to obtain a set of 3D-2D correspondences required by a camera attitude estimation; and 4) evaluating quality of the tracking according to a number of the tracked feature points and classifying the quality into good, medium and poor; 4.1) when the quality of the tracking is good, performing extension and optimization of a local map, and then determining whether to select a new key frame; 4.2) when the quality of the tracking is medium, estimating a set of local homography matrices Hk→i L, and re-matching a feature that has failed to be tracked; and 4.3) when the quality of the tracking is poor, triggering a relocating program, and once the relocating is successful, performing a global homography matrix tracking using a key frame that is relocated, and then performing tracking of features again, wherein in step 4.1), when it is determined to select a new key frame, the selected new key frame wakes up a background thread for global optimization, the new key frame and a new triangulated three-dimensional point is added for extending a global map, and a local bundle adjustment is adopted for optimization; then an existing three-dimensional plane is extended, the added new three-dimensional point is given to the
existing plane, or a new three-dimensional plane is extracted from the added new three-dimensional point; subsequently, a loop is detected by matching the new key frame with an existing key frame; and finally, a global bundle adjustment is performed on the global map.) 11. for each of a plurality of windows within the background-suppressed sensor data field that are centered on a different background-suppressed element, determining whether the window includes a peak element at a peak location that satisfies a peak criterion; (Zhang: [0030]-[0031], In an embodiment, said performing extension and optimization of the local map in step 4.1) includes: for one two-dimensional feature xk of a frame Fk, calculating a ray angle according to a formula of [formula rendered as an image in the original record; not reproduced]) 11. and for each peak element, growing an anomalous object from the peak location of that peak element to include elements whose positions are adjacent to each other in the field and that satisfy an object criterion; (Zhang: [0032]-[0036], [0032] In an embodiment, said fixing the three-dimensional point position and optimizing all camera orientations in the local window includes: assuming that there is already a linear acceleration â and a rotation speed ω̂ measured in a local coordinate system, and a real linear acceleration is a = â − ba + na, a real rotation speed is ω = ω̂ − bω + nω, where na ~ N(0, σna²I) and nω ~ N(0, σnω²I) are Gaussian noise of inertial measurement data, I is a 3×3 identity matrix, and ba and bω are respectively offsets of the linear acceleration and the rotation speed with time, extending a state of a camera motion to be: s = [qᵀ pᵀ vᵀ baᵀ bωᵀ]ᵀ, where v is a linear velocity in a global coordinate system, a continuous-time motion equation of all states is: [equation rendered as an image in the original record; not reproduced])
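The claim 11 steps just mapped — suppressing background elements, testing each window for a peak element, and growing an anomalous object outward from each peak — can be sketched in a few lines. This is a minimal illustration under stated assumptions: the 2-D field, the zero-out suppression rule, the 3×3 window with a unique-maximum peak criterion, and the fraction-of-peak growth criterion are all hypothetical choices for the sketch, not taken from the application or the cited references.

```python
import numpy as np
from collections import deque

def suppress_background(field, level):
    # Hypothetical suppression rule: zero every element at or below `level`.
    out = field.copy()
    out[out <= level] = 0.0
    return out

def find_peaks(field, half=1):
    # Slide a window centered on each interior element; call the element a
    # "peak" if it is the unique, nonzero maximum of its window (an
    # illustrative stand-in for the claimed "peak criterion").
    peaks = []
    rows, cols = field.shape
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            window = field[r - half:r + half + 1, c - half:c + half + 1]
            v = field[r, c]
            if v > 0 and v == window.max() and (window == v).sum() == 1:
                peaks.append((r, c))
    return peaks

def grow_object(field, seed, frac=0.5):
    # Grow from the peak location over 4-adjacent elements that satisfy a
    # simple "object criterion" (here: value >= frac * peak value).
    thresh = frac * field[seed]
    seen = {seed}
    frontier = deque([seed])
    obj = [seed]
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < field.shape[0] and 0 <= nc < field.shape[1]
                    and (nr, nc) not in seen and field[nr, nc] >= thresh):
                seen.add((nr, nc))
                frontier.append((nr, nc))
                obj.append((nr, nc))
    return obj
```

In this sketch, each grown object is a list of element positions anchored at its peak, which is the shape the later feature-extraction step would consume.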
extracting a feature vector of features for the grown anomalous object including at least: positions of objects of interest, depth of object, mean of observed sensor values for sensor readings associated with the object, and object symmetry in the sensor data fields; and (Zhang: [0014]-[0023], the SLAM algorithm requires the following elements [two figures rendered as images in the original record; not reproduced]) 11. classifying the feature vector as representing an anomalous object of interest or an anomalous object not of interest, the classifier being the classifier associated with the selected background suppression level. (Zhang: [0014]-[0023], 4) evaluating quality of the tracking according to a number of the tracked feature points and classifying the quality into good, medium and poor; 4.1) when the quality of the tracking is good, performing extension and optimization of a local map, and then determining whether to select a new key frame; 4.2) when the quality of the tracking is medium, estimating a set of local homography matrices Hk→i L, and re-matching a feature that has failed to be tracked; and 4.3) when the quality of the tracking is poor, triggering a relocating program, and once the relocating is successful, performing a global homography matrix tracking using a key frame that is relocated, and then performing tracking of features again, wherein in step 4.1), when it is determined to select a new key frame, the selected new key frame wakes up a background thread for global optimization, the new key frame and a new triangulated three-dimensional point is added for extending a global map, and a local bundle adjustment is adopted for optimization; then an existing three-dimensional plane is extended, the added new three-dimensional point is given to the existing plane, or a new three-dimensional plane is extracted from the added new three-dimensional point; subsequently, a loop is detected by matching the new key
frame with an existing key frame; and finally, a global bundle adjustment is performed on the global map.) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Hong's learning-model method and system for feature extraction using background contextual information to leverage an algorithm for simultaneous localization and mapping as proposed by Zhang, because both references are directed to image processing and analysis. The determination of obviousness is predicated upon the following findings: one skilled in the art would have been motivated to modify Hong in order to use Zhang's feature tracking based on multiple homography matrices, which takes into account increased motion constraints, and to refine the type of background that can be contextually used in the overall feature extraction process. Furthermore, the prior art collectively includes each claimed element (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and programming techniques, without changing a "fundamental" operating principle of Hong. In the combination, the teaching of Zhang continues to perform the same function as originally taught: feature tracking based on multiple homography matrices that accounts for increased motion constraints and refines the type of background that can be extracted. The combination would produce the repeatable and predictable result of ensuring that the learning operation leverages a more representative background that can be separated or learned more effectively in the learning model. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question. Consider Claim 12. The combination of Hong and Zhang teaches: 12.
(Original) The method of claim 11 wherein an element is background suppressed by multiplying by an attenuation coefficient derived from a candidate attenuation coefficient associated with neighborhoods of elements surrounding the element. (Hong: [0028] Returning to FIG. 3, at step 304, the vessel tree region front is propagated. From the initial vessel tree templates, vessel tree regions are labeled progressively using a front propagation algorithm at multiple confidence intervals. The motivation behind the idea of front propagation is to control the formation of vessel trees and to generate contextual information for the later feature extraction process, instead of a plain pixel classification using intensity and/or texture features. In the front propagation process, only pixels that are within a small neighborhood of the previously propagated vessel tree regions are evaluated for possible propagation. If a pixel satisfies the confidence level constraint, the pixel is propagated. Let I(x,y) be the raw intensity image, G(x,y) be the gradient image; the confidence value C(x,y) at a pixel (x,y) is computed as: [formula rendered as an image in the original record; not reproduced] where K(x,y) represents the accumulated distance at (x,y) along the trace of front propagation, and a<1 and b<1 are weighting parameters which are set to 0.4 and 0.2, respectively. Multiple levels of confidence are pre-established as control thresholds. At each level of control thresholds, the fronts of current vessel tree regions are propagated as long as there are a sufficient number of front pixels with their confidence value larger than the control threshold. FIG. 5 illustrates pseudo-code 500 for implementing the front propagation algorithm according to an embodiment of the present invention. As shown in the pseudo-code 500, four confidence levels can be used for propagation of the vessel tree region front in an advantageous implementation. [0030] Returning to FIG.
2, at step 208, features are extracted for the nodule candidates based on the background contextual information. In conventional nodule feature extraction algorithms described above, a nodule is either explicitly or implicitly modeled as an overlay of two separate intensity formations: (1) the round shaped nodule blob and (2) the lung background which may contain different structures. Except for a few large and/or dense nodules, genuine nodules typically exhibit as weak round shaped blobs added to the underlying background structures. Feature computation within a limited region of interest of a nodule, such as the peak support region used by the matching filtering based techniques and the exploration ring range used by the adaptive ring filtering based techniques, is not able to reveal the subtle difference between genuine nodules and false positive background structures in image properties. As observed by the present inventors, it is actually the relationship between a nodule and the underlying background structures (e.g., vessel trees) that is effective in differentiating a false positive from a genuine nodule. Therefore, under background vessel tree context, the nodule feature extraction method focuses on assessing the relationship of a susceptible nodule with the labeled vessel tree structures, according to an embodiment of the present invention.) Consider Claim 13. The combination of Hong and Zhang teaches: 13. (Original) The method of claim 11 further comprising for each of a plurality of different background suppression levels: for each of a plurality of sensor data fields used for training, performing background suppression of the elements in that sensor data field based on that background suppression level; (Hong: [0024] The multi-filter based nodule candidate detection method includes a number of relatively independent processing stages.
First, a multiscale filtering stage is performed, in which a number of filtered images are generated using filters that are tuned to nodules in a certain range. Next, a nodule candidate detection stage is performed, in which a local peak detection algorithm using multiple thresholding based shape analysis is applied to each of the filtered images. Then, a fusion stage is performed, in which detection results from different filtered images are fused together to produce the final detection result. The final detection result gives points and the estimated size in the chest x-ray image that are nodule candidates. [0025] At step 206, background contextual information is defined in the chest x-ray image. For the nodule feature extraction, background contextual information refers to prominent background structures inside lung regions that complicate the detection of genuine nodules. Ideally, if the background structures are well defined, then a precise segmentation of the background structures may be obtained, which forms a valid representation of the background contextual information. However, contextual background structures in a chest x-ray image may not always be well defined. In this case, the background contextual information may be defined by a pseudo-segmentation of the background structures with a concentration focused on labeling significant intensity and/or structure abnormalities.) extracting peaks in the background-suppressed sensor data field; and growing anomalous objects in that sensor data field from peaks in the background-suppressed sensor data field; Hong: [0024] The multi-filter based nodule candidate detection method includes a number of relatively independent processing stages. First, a multiscale filtering stage is performed, in which a number of filtered images are generated using filters that are tuned to nodules in a certain range.
Next, a nodule candidate detection stage is performed, in which a local peak detection algorithm using multiple thresholding based shape analysis is applied to each of the filtered images. Then, a fusion stage is performed, in which detection results from different filtered images are fused together to produce the final detection result. The final detection result gives points and the estimated size in the chest x-ray image that are nodule candidates. [0025] At step 206, background contextual information is defined in the chest x-ray image. For the nodule feature extraction, background contextual information refers to prominent background structures inside lung regions that complicate the detection of genuine nodules. extracting a feature vector for each grown anomalous object; (Hong: [0023] Embodiments of the present invention are directed to an extraction technique that isolates prominent background structures in a chest x-ray image before feature extraction, and calculates features under the context of a global view of the isolated contextual background structures. FIG. 2 illustrates a method for nodule detection and nodule feature extraction using background contextual information according to an embodiment of the present invention. As illustrated in FIG. 2, at step 202 a chest x-ray image is received. [0028] Returning to FIG. 3, at step 304, the vessel tree region front is propagated. From the initial vessel tree templates, vessel tree regions are labeled progressively using a front propagation algorithm at multiple confidence intervals. The motivation behind the idea of front propagation is to control the formation of vessel trees and to generate contextual information for the later feature extraction process, instead of a plain pixel classification using intensity and/or texture features. In the front propagation process, only pixels that are within a small neighborhood of the previously propagated vessel tree regions are evaluated for possible propagation.) 
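Once an object has been grown, a feature vector is extracted for it. The quoted record does not concretely define "depth of object" or "object symmetry," so the sketch below uses illustrative stand-ins (intensity range for depth, a centroid-spread ratio for symmetry); only the "mean of observed sensor values" maps directly. These definitions are assumptions for illustration, not the application's own.

```python
import numpy as np

def extract_features(field, obj):
    # `obj` is a list of (row, col) element positions for one grown object.
    # "Depth" and "symmetry" below are hypothetical proxies for the claimed
    # features; the mean is the mean of the object's observed sensor values.
    vals = np.array([field[p] for p in obj], dtype=float)
    rs = np.array([p[0] for p in obj], dtype=float)
    cs = np.array([p[1] for p in obj], dtype=float)
    depth = vals.max() - vals.min()      # intensity range as a depth proxy
    mean_val = vals.mean()               # mean of observed sensor values
    spread_r, spread_c = rs.std(), cs.std()
    hi = max(spread_r, spread_c)
    # Ratio of positional spreads about the centroid: 1.0 ~ symmetric blob.
    symmetry = (min(spread_r, spread_c) / hi) if hi > 0 else 1.0
    return np.array([depth, mean_val, symmetry])
```

The resulting fixed-length vector is what the downstream classifier (or, in Hong's scheme, a per-feature threshold test) would consume.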
and assigning a class label of interest or not of interest to each grown anomalous object based on prior knowledge of objects of interest within that sensor data field; and training an object classifier using feature vectors and the class labels. (Hong: [0028] Returning to FIG. 3, at step 304, the vessel tree region front is propagated. From the initial vessel tree templates, vessel tree regions are labeled progressively using a front propagation algorithm at multiple confidence intervals. The motivation behind the idea of front propagation is to control the formation of vessel trees and to generate contextual information for the later feature extraction process, instead of a plain pixel classification using intensity and/or texture features. In the front propagation process, only pixels that are within a small neighborhood of the previously propagated vessel tree regions are evaluated for possible propagation. If a pixel satisfies the confidence level constraint, the pixel is propagated. Let I(x,y) be the raw intensity image, G(x,y) be the gradient image; the confidence value C(x,y) at a pixel (x,y) is computed as: [formula rendered as an image in the original record; not reproduced] where K(x,y) represents the accumulated distance at (x,y) along the trace of front propagation, and a<1 and b<1 are weighting parameters which are set to 0.4 and 0.2, respectively. Multiple levels of confidence are pre-established as control thresholds. At each level of control thresholds, the fronts of current vessel tree regions are propagated as long as there are a sufficient number of front pixels with their confidence value larger than the control threshold. FIG. 5 illustrates pseudo-code 500 for implementing the front propagation algorithm according to an embodiment of the present invention. As shown in the pseudo-code 500, four confidence levels can be used for propagation of the vessel tree region front in an advantageous implementation. [0030] Returning to FIG.
2, at step 208, features are extracted for the nodule candidates based on the background contextual information. In conventional nodule feature extraction algorithms described above, a nodule is either explicitly or implicitly modeled as an overlay of two separate intensity formations: (1) the round shaped nodule blob and (2) the lung background which may contain different structures. Except for a few large and/or dense nodules, genuine nodules typically exhibit as weak round shaped blobs added to the underlying background structures. Feature computation within a limited region of interest of a nodule, such as the peak support region used by the matching filtering based techniques and the exploration ring range used by the adaptive ring filtering based techniques, is not able to reveal the subtle difference between genuine nodules and false positive background structures in image properties. As observed by the present inventors, it is actually the relationship between a nodule and the underlying background structures (e.g., vessel trees) that is effective in differentiating a false positive from a genuine nodule. Therefore, under background vessel tree context, the nodule feature extraction method focuses on assessing the relationship of a susceptible nodule with the labeled vessel tree structures, according to an embodiment of the present invention.) Consider Claim 14. Hong teaches: 14. (Original) A method performed by one or more computing systems for generating a classifier to classify anomalous objects extracted from a sensor data field as of interest or not of interest, the method comprising: (Hong: abstract, A method and system for nodule feature extraction using background contextual information in chest x-ray images is disclosed. In order to detect false positives in nodule candidates for a chest x-ray image, background contextual information, such as contextual vessel tree information, is defined in the chest x-ray image.
Features are extracted for each nodule candidate based on the background contextual information, and the extracted features are used to detect whether each nodule candidate is a false positive or a genuine nodule. [0022] FIG. 1 illustrates nodule-like vessel structures in chest x-ray images. As illustrated in FIG. 1, images 102 and 104 are chest x-ray images with nodule-like vessel tree structures. In images 102 and 104, a number of vessel tree clusters 106 and 108 at the regions near the mediastinum appear more like genuine nodules than actual genuine nodules using conventional nodule features. Accordingly, the conventional nodule features are not capable of identifying such vessel tree clusters as false positives. However, the vessel tree structures nearby clearly provide useful indication that these are not genuine nodules. [0023] FIG. 2 illustrates a method for nodule detection and nodule feature extraction using background contextual information according to an embodiment of the present invention. As illustrated in FIG. 2, at step 202 a chest x-ray image is received. The image can be received directly from an image acquisition device, such as an x-ray imaging device. Alternatively, the image can be received by loading an image, stored on a computer readable medium, or memory or storage of a computer system. [0024] At step 204, nodule candidates are detected in the chest x-ray image. For example, nodule candidates can be detected by a computer aided automatic nodule detection method. For example, any well-known automatic nodule detection method can be utilized to detect nodule candidates in the chest x-ray image. [0041], Figure 8) 14. 
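Hong's step 210 ([0038], quoted above) compares each extracted feature to a corresponding threshold to separate genuine nodules from false positives. A minimal reading of that rule can be sketched as below; the all-features-must-pass rule and the assumption that larger values favor genuineness are both interpretive choices for the sketch, not stated in Hong.

```python
def is_genuine(features, thresholds):
    # Per Hong [0038], each feature "can be compared to a corresponding
    # threshold" to decide false positive vs. genuine nodule. The direction
    # of each comparison and the all-must-pass rule are assumptions here.
    return all(f >= t for f, t in zip(features, thresholds))
```

As Hong also notes, the same features could instead feed a learned classifier rather than fixed per-feature thresholds.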
for each of a plurality of different background suppression levels, training an object classifier using training data extracted from background-suppressed sensor data fields based on that background suppression level, (Hong: [0023] Embodiments of the present invention are directed to an extraction technique that isolates prominent background structures in a chest x-ray image before feature extraction, and calculates features under the context of a global view of the isolated contextual background structures. FIG. 2 illustrates a method for nodule detection and nodule feature extraction using background contextual information according to an embodiment of the present invention. As illustrated in FIG. 2, at step 202 a chest x-ray image is received. [0024] The multi-filter based nodule candidate detection method includes a number of relatively independent processing stages. First, a multiscale filtering stage is performed, in which a number of filtered images are generated using filters that are tuned to nodules in a certain range. Next, a nodule candidate detection stage is performed, in which a local peak detection algorithm using multiple thresholding based shape analysis is applied to each of the filtered images. Then, a fusion stage is performed, in which detection results from different filtered images are fused together to produce the final detection result. The final detection result gives points and the estimated size in the chest x-ray image that are nodule candidates. [0025] At step 206, background contextual information is defined in the chest x-ray image. For the nodule feature extraction, background contextual information refers to prominent background structures inside lung regions that complicate the detection of genuine nodules. Ideally, if the background structures are well defined, then a precise segmentation of the background structures may be obtained, which forms a valid representation of the background contextual information.
However, contextual background structures in a chest x-ray image may not always be well defined. In this case, the background contextual information may be defined by a pseudo-segmentation of the background structures with a concentration focused on labeling significant intensity and/or structure abnormalities. [0026] According to an embodiment of the present invention, the vessel tree can be the prominent background structure in the chest x-ray image that affects accuracy of nodule detection. Accordingly, the representation of the vessel tree can be used to define the background contextual information of the chest x-ray image. Vessel trees in a chest x-ray image are a 2D projection of 3D vessels into the image plane. Vessel trees generally form clusters of high intensity regions near the low inside boundary of both lung lobes. The vessel trees are highly irregular and non-uniform, and there may be no clearly defined boundary. The intensity becomes weaker as the vessel trees extend to outer regions of the lung lobes. For these reasons, it can be very difficult to form a precise segmentation of the vessel trees. In order to define the background contextual information, a pseudo-segmentation of the vessel regions in the chest x-ray image is generated. The representation of the vessel trees is established using a multi-level representation schema with decreasing confidence levels. At the highest confidence level, the vessel tree is represented by a pair of predefined templates as starting shapes. With the progressive decrease of the confidence levels, additional vessel tree clusters are propagated and merged with the vessel tree regions that are already propagated. The propagated vessel tree regions form a global background context, which provides important clues to differentiate between nodule-like vessel tree structures and genuine nodules. [0027]-[0030], Figure 3) 14.
the training data including feature vectors for anomalous objects labeled as of interest or not of interest based on prior knowledge of features including at least: positions of objects of interest, depth of object, mean of observed sensor values for sensor readings associated with the object, object symmetry in the sensor data fields in the sensor data fields; (Hong: [0032] According to an embodiment of the present invention, a set of four features are derived by analyzing the properties of the region enclosed by vessel trees and covered by the extended region of interest of a nodule candidate. In order to extract these features for each nodule candidate, a region of interest is estimated for each candidate. The region of interest for each candidate is a circular region approximately covering the candidate. As described above, the size and location of the candidate is estimated in the candidate generation algorithm. The extended covering region of interest for each candidate is defined as the circular region that is an expansion of the original circular region of interest to twice the size of the original region of interest. Within the defined region of interest, the following four features (i.e., first, second, third, and fourth features, respectively) are calculated: [0033]-[0036] 1. a regular shape feature of an empty region at the highest possible confidence level where the candidate point is in the background region; 2. the average propagation distance of vessel tree pixels next to the empty region at the highest possible confidence level; 3. the relative size of the empty region with respect to the covering region of interest at the possible highest confidence level; and 4. the weighted sum of confidence levels of boundary pixels in vessel tree regions that are next to the empty region.) (Hong: [0024] The multi-filter based nodule candidate detection method includes a number of relatively independent processing stages.
[0027]-[0030], Figure 3, [0028] Returning to FIG. 3, at step 304, the vessel tree region front is propagated. From the initial vessel tree templates, vessel tree regions are labeled progressively using a front propagation algorithm at multiple confidence intervals. The motivation behind the idea of front propagation is to control the formation of vessel trees and to generate contextual information for the later feature extraction process, instead of a plain pixel classification using intensity and/or texture features. In the front propagation process, only pixels that are within a small neighborhood of the previously propagated vessel tree regions are evaluated for possible propagation. [0032] According to an embodiment of the present invention, a set of four features are derived by analyzing the properties of the region enclosed by vessel trees and covered by the extended region of interest of a nodule candidate. In order to extract these features for each nodule candidate, a region of interest is estimated for each candidate. The region of interest for each candidate is a circular region approximately covering the candidate. As described above, the size and location of the candidate is estimated in the candidate generation algorithm. The extended covering region of interest for each candidate is defined as the circular region that is an expansion of the original circular region of interest to twice the size of the original region of interest. Within the defined region of interest, the following four features (i.e., first, second, third, and fourth features, respectively) are calculated: [0033]-[0036] 1. a regular shape feature of an empty region at the highest possible confidence level where the candidate point is in the background region; 2. the average propagation distance of vessel tree pixels next to the empty region at the highest possible confidence level; 3. 
the relative size of the empty region with respect to the covering region of interest at the possible highest confidence level; and 4. the weighted sum of confidence levels of boundary pixels in vessel tree regions that are next to the empty region.) 14. and selecting one of the object classifiers associated with a background suppression level based on effectiveness of classification. (Hong: [0007], [0024] The multi-filter based nodule candidate detection method includes a number of relatively independent processing stages. [0032] According to an embodiment of the present invention, a set of four features are derived by analyzing the properties of the region enclosed by vessel trees and covered by the extended region of interest of a nodule candidate. In order to extract these features for each nodule candidate, a region of interest is estimated for each candidate. The region of interest for each candidate is a circular region approximately covering the candidate. As described above, the size and location of the candidate is estimated in the candidate generation algorithm. The extended covering region of interest for each candidate is defined as the circular region that is an expansion of the original circular region of interest to twice the size of the original region of interest. Within the defined region of interest, the following four features (i.e., first, second, third, and fourth features, respectively) are calculated: [0033]-[0036] 1. a regular shape feature of an empty region at the highest possible confidence level where the candidate point is in the background region; 2. the average propagation distance of vessel tree pixels next to the empty region at the highest possible confidence level; 3. the relative size of the empty region with respect to the covering region of interest at the possible highest confidence level; and 4. the weighted sum of confidence levels of boundary pixels in vessel tree regions that are next to the empty region.
[0037] Note that even though the first, second, third, and fourth features are computed within a local region of interest, they actually represent the relationships between regions of interest and overall vessel trees, which are of global in nature. Although, four features are described herein, the present invention is not limited thereto. For example, more subtle relationships between the vessel tree regions and nodules can be derived and extracted as features. [0038] At step 210, false positive nodule candidates and genuine nodules are detected based on the extracted features. For example, each of the features extracted for each candidate nodule based on the background contextual information, such as the first, second, third, and fourth features described above, can be compared to a corresponding threshold in order to determine whether each nodule candidate is a false positive or a genuine nodule. This detection of false positives and genuine nodules can confirm the presence of actual nodules, while eliminating false positives erroneously detected using an automatic nodule detection algorithm. These features can also be used as inputs to other classification schemes to differentiate genuine nodules from false positives. For example, these features can be used to train a learning base classifier to differentiate genuine nodules from false positives. [0039] FIG. 7) Even if Hong does not teach: the training data including feature vectors for anomalous objects labeled as of interest or not of interest based on prior knowledge of features including at least: depth of object, mean of observed sensor values for sensor readings associated with the object, and object symmetry in the sensor data fields in the sensor data fields; Zhang teaches: 14. 
(Original) A method performed by one or more computing systems for generating a classifier to classify anomalous objects extracted from a sensor data field as of interest or not of interest, the method comprising: (Zhang: abstract, The present disclosure discloses a method for simultaneous localization and mapping, which can reliably handle strong rotation and fast motion. The method provided a simultaneous localization and mapping algorithm framework based on a key frame, which can support rapid local map extension. Under this framework, a new feature tracking method based on multiple homography matrices is provided, and this method is efficient and robust under strong rotation and fast motion. A camera orientation optimization framework based on a sliding window is further provided to increase motion constraint between successive frames with simulated or actual IMU data. Finally, a method for obtaining a real scale of a specific plane and scene is provided in such a manner that a virtual object is placed on a specific plane in real size. [0014]-[0041]) 14. 
for each of a plurality of different background suppression levels, training an object classifier using training data extracted from background-suppressed sensor data fields based on that background suppression level, (Zhang: [0014]-[0023], [0014] A method for simultaneous localization and mapping, includes steps of: 1) a foreground thread processing a video stream in real time, and extracting a feature point for any current frame I_i; 2) tracking a set of global homography matrices H_i^G; 3) using a global homography matrix and a specific plane homography matrix to track a three-dimensional point so as to obtain a set of 3D-2D correspondences required by a camera attitude estimation; and 4) evaluating quality of the tracking according to a number of the tracked feature points and classifying the quality into good, medium and poor; 4.1) when the quality of the tracking is good, performing extension and optimization of a local map, and then determining whether to select a new key frame; 4.2) when the quality of the tracking is medium, estimating a set of local homography matrices H_{k→i}^L, and re-matching a feature that has failed to be tracked; and 4.3) when the quality of the tracking is poor, triggering a relocating program, and once the relocating is successful, performing a global homography matrix tracking using a key frame that is relocated, and then performing tracking of features again, wherein in step 4.1), when it is determined to select a new key frame, the selected new key frame wakes up a background thread for global optimization, the new key frame and a new triangulated three-dimensional point is added for extending a global map, and a local bundle adjustment is adopted for optimization; then an existing three-dimensional plane is extended, the added new three-dimensional point is given to the existing plane, or a new three-dimensional plane is extracted from the added new three-dimensional point; subsequently, a loop is detected by matching the new key
frame with an existing key frame; and finally, a global bundle adjustment is performed on the global map.) the training data including feature vectors for anomalous objects labeled as of interest or not of interest based on prior knowledge of features including at least: positions of objects of interest, depth of object, mean of observed sensor values for sensor readings associated with the object, and object symmetry in the sensor data fields; (Zhang: [0014]-[0023] SLAM algorithm requires the following elements [two figure images omitted] ) 14. and selecting one of the object classifiers associated with a background suppression level based on effectiveness of classification. (Zhang: [0014]-[0023], 4) evaluating quality of the tracking according to a number of the tracked feature points and classifying the quality into good, medium and poor; 4.1) when the quality of the tracking is good, performing extension and optimization of a local map, and then determining whether to select a new key frame; 4.2) when the quality of the tracking is medium, estimating a set of local homography matrices H_{k→i}^L, and re-matching a feature that has failed to be tracked; and 4.3) when the quality of the tracking is poor, triggering a relocating program, and once the relocating is successful, performing a global homography matrix tracking using a key frame that is relocated, and then performing tracking of features again, wherein in step 4.1), when it is determined to select a new key frame, the selected new key frame wakes up a background thread for global optimization, the new key frame and a new triangulated three-dimensional point is added for extending a global map, and a local bundle adjustment is adopted for optimization; then an existing three-dimensional plane is extended, the added new three-dimensional point is given to the existing plane, or a new three-dimensional
plane is extracted from the added new three-dimensional point; subsequently, a loop is detected by matching the new key frame with an existing key frame; and finally, a global bundle adjustment is performed on the global map.) It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Hong’s learning model method and system for feature extraction using background contextual information to leverage an algorithm for simultaneous localization and mapping as proposed by Zhang because they are both directed to image processing and analysis. The determination of obviousness is predicated upon the following findings: One skilled in the art would have been motivated to modify Hong in order to use Zhang’s algorithm for feature tracking based on multiple homography matrices that takes into account increased motion constraints and to refine the type of background that can be contextually used in the overall feature extraction process. Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface and programming techniques, without changing a “fundamental” operating principle of Hong, while the teaching of Zhang continues to perform the same function as originally taught prior to being combined for feature tracking based on multiple homography matrices that takes into account increased motion constraints, and refines the type of background that can be extracted, in order to produce the repeatable and predictable result of ensuring that the learning operation leverages a more reflective background that can be separated or learned more effectively in the learning model. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.
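To make the claim-14 method concrete — train one object classifier per background suppression level, score each classifier's effectiveness, and keep the level whose classifier performs best — a minimal sketch follows. Everything here is a hypothetical stand-in: the percentile-based suppression, the nearest-centroid classifier, and all names are illustrative assumptions, not taken from the application or the cited art (which uses features such as depth, mean sensor value, and symmetry).

```python
import numpy as np

def suppress_background(field, level):
    """Hypothetical suppression: zero out values at or below the given percentile."""
    return np.where(field > np.percentile(field, level), field, 0.0)

def fit_centroids(X, y):
    """Tiny stand-in classifier: one mean feature vector per class label."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, X):
    """Assign each row of X to the label of its nearest centroid."""
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[l], axis=1) for l in labels])
    return np.asarray(labels)[dists.argmin(axis=0)]

def select_suppression_level(fields, labels, levels=(50, 75, 90)):
    """Train at each suppression level; keep the most effective classifier."""
    labels = np.asarray(labels)
    n_train = int(0.75 * len(fields))
    best = (None, None, -1.0)  # (level, centroids, effectiveness)
    for level in levels:
        # Feature vectors here are just the flattened suppressed fields.
        X = np.stack([suppress_background(f, level).ravel() for f in fields])
        centroids = fit_centroids(X[:n_train], labels[:n_train])
        # Effectiveness = fraction of held-out objects classified correctly
        acc = float(np.mean(predict(centroids, X[n_train:]) == labels[n_train:]))
        if acc > best[2]:
            best = (level, centroids, acc)
    return best
```

The selection step mirrors claim 16's effectiveness score: correct versus incorrect classifications on held-out objects decide which suppression level's classifier is kept.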
Consider Claim 15. The combination of Hong and Zhang teaches: 15. The method of claim 14 further comprising for each background suppression level: for each sensor data field, identifying peak elements in the background-suppressed sensor data field that satisfy a peak criterion; and for each peak element within the background-suppressed sensor data field, growing an anomalous object in the sensor data field from the peak element to include elements that are connected to each other in the sensor data field and satisfy an anomalous object criterion; (Hong: [0024] The multi-filter based nodule candidate detection method includes a number of relatively independent processing stages. First, a multiscale filtering stage is performed, in which a number of filtered images are generated using filters that are tuned to nodules in a certain range. Next, a nodule candidate detection stage is performed, in which a local peak detection algorithm using multiple thresholding based shape analysis is applied to each of the filtered images. Then, a fusion stage is performed, in which detection results from different filtered images are fused together to produce the final detection result. The final detection result gives points and the estimated size in the chest x-ray image that are nodule candidates. [0025] At step 206, background contextual information is defined in the chest x-ray image. For the nodule feature extraction, background contextual information refers to prominent background structures inside lung regions that complicate the detection of genuine nodules. [0026]-[0027] FIG. 3 illustrates a method of vessel tree propagation for defining background contextual information according to an embodiment of the present invention. The method of FIG. 3 uses a controlled marching process operating on a combination of the raw chest x-ray image and a gradient image of the chest x-ray image. The method of FIG. 3 assumes that both lung lobes are pre-segmented and labeled.
The lungs can be segmented using any well-known lung segmentation algorithm. As illustrated in FIG. 3, at step 302, initial vessel tree templates are generated. The method starts from a pair of initial template shapes that are placed at the low inside boundaries of the segmented lung lobes. The initial vessel tree template shapes are a pair of binary blocks with a predefined shape that are refined and placed at the low inner boundaries of segmented lung masks. Zhang: [0014]-[0023] SLAM algorithm requires the following elements [two figure images omitted] ) 15. extracting a feature vector representing features of the grown anomalous object; and labeling the feature vector as being of interest or not of interest based on prior knowledge of the positions of objects that are of interest in the sensor data field. (Hong: [0028] Returning to FIG. 3, at step 304, the vessel tree region front is propagated. From the initial vessel tree templates, vessel tree regions are labeled progressively using a front propagation algorithm at multiple confidence intervals. The motivation behind the idea of front propagation is to control the formation of vessel trees and to generate contextual information for the later feature extraction process, instead of a plain pixel classification using intensity and/or texture features. In the front propagation process, only pixels that are within a small neighborhood of the previously propagated vessel tree regions are evaluated for possible propagation. [0032] According to an embodiment of the present invention, a set of four features are derived by analyzing the properties of the region enclosed by vessel trees and covered by the extended region of interest of a nodule candidate. In order to extract these features for each nodule candidate, a region of interest is estimated for each candidate.
The region of interest for each candidate is a circular region approximately covering the candidate. As described above, the size and location of the candidate is estimated in the candidate generation algorithm. The extended covering region of interest for each candidate is defined as the circular region that is an expansion of the original circular region of interest to twice the size of the original region of interest. Within the defined region of interest, the following four features (i.e., first, second, third, and fourth features, respectively) are calculated: [0033]-[0036] 1. a regular shape feature of an empty region at the highest possible confidence level where the candidate point is in the background region; 2. the average propagation distance of vessel tree pixels next to the empty region at the highest possible confidence level; 3. the relative size of the empty region with respect to the covering region of interest at the possible highest confidence level; and 4. the weighted sum of confidence levels of boundary pixels in vessel tree regions that are next to the empty region. [0037] Note that even though the first, second, third, and fourth features are computed within a local region of interest, they actually represent the relationships between regions of interest and overall vessel trees, which are global in nature. Although four features are described herein, the present invention is not limited thereto. For example, more subtle relationships between the vessel tree regions and nodules can be derived and extracted as features. [0038] At step 210, false positive nodule candidates and genuine nodules are detected based on the extracted features.
For example, each of the features extracted for each candidate nodule based on the background contextual information, such as the first, second, third, and fourth features described above, can be compared to a corresponding threshold in order to determine whether each nodule candidate is a false positive or a genuine nodule. This detection of false positives and genuine nodules can confirm the presence of actual nodules, while eliminating false positives erroneously detected using an automatic nodule detection algorithm. These features can also be used as inputs to other classification schemes to differentiate genuine nodules from false positives. For example, these features can be used to train a learning base classifier to differentiate genuine nodules from false positives. [0039] FIG. 7; Zhang: [0032]-[0036], [0032] In an embodiment, said fixing the three-dimensional point position and optimizing all camera orientations in the local window includes: assuming that there is already a linear acceleration â and a rotation speed ω̂ measured in a local coordinate system, and a real linear acceleration is a = â − b_a + n_a, a real rotation speed is ω = ω̂ − b_ω + n_ω, where n_a ~ N(0, σ_{n_a}^2 I) and n_ω ~ N(0, σ_{n_ω}^2 I) are Gaussian noise of inertial measurement data, I is a 3×3 identity matrix, and b_a and b_ω are respectively offsets of the linear acceleration and the rotation speed with time, extending a state of a camera motion to be: s = [q^T p^T v^T b_a^T b_ω^T]^T, where v is a linear velocity in a global coordinate system; a continuous-time motion equation of all states is: [equation image omitted] ) Consider Claim 16. The combination of Hong and Carlson teaches: 16. (Original) The method of claim 14 further comprising for the classifier trained on sensor field data at each background suppression level, generating an effectiveness score based on the number of correct and incorrect object classifications made by that classifier.
(Examiner Note: the thresholding operation to confirm actual nodules and eliminate false positives acts as an effectiveness score in the broadest reasonable interpretation to one of ordinary skill in the art at the time of filing. Hong: [0038] At step 210, false positive nodule candidates and genuine nodules are detected based on the extracted features. For example, each of the features extracted for each candidate nodule based on the background contextual information, such as the first, second, third, and fourth features described above, can be compared to a corresponding threshold in order to determine whether each nodule candidate is a false positive or a genuine nodule. This detection of false positives and genuine nodules can confirm the presence of actual nodules, while eliminating false positives erroneously detected using an automatic nodule detection algorithm. These features can also be used as inputs to other classification schemes to differentiate genuine nodules from false positives. For example, these features can be used to train a learning base classifier to differentiate genuine nodules from false positives. [0039] FIG. 7 illustrates exemplary false positives and genuine nodules covering vessel regions in chest x-ray images. As illustrated in FIG. 7, image 710 is a chest x-ray image showing the propagated vessel regions 712, false positive nodule candidates 714, and genuine nodules 716. Image 720 is a chest x-ray image showing the propagated vessel regions 722 and false positive nodule candidates 724. [0028]) Consider Claim 17. The combination of Hong and Carlson teaches: 17. (Original) The method of claim 16 wherein the classifier output is a real number that is a rating as to whether the input object is of interest.
(Examiner Note: the confidence value computation serves as a real-number classifier output rating whether the input object is of interest. Hong: [0038] At step 210, false positive nodule candidates and genuine nodules are detected based on the extracted features. For example, each of the features extracted for each candidate nodule based on the background contextual information, such as the first, second, third, and fourth features described above, can be compared to a corresponding threshold in order to determine whether each nodule candidate is a false positive or a genuine nodule. This detection of false positives and genuine nodules can confirm the presence of actual nodules, while eliminating false positives erroneously detected using an automatic nodule detection algorithm. These features can also be used as inputs to other classification schemes to differentiate genuine nodules from false positives. For example, these features can be used to train a learning base classifier to differentiate genuine nodules from false positives. Hong: [0028] Returning to FIG. 3, at step 304, the vessel tree region front is propagated. From the initial vessel tree templates, vessel tree regions are labeled progressively using a front propagation algorithm at multiple confidence intervals. The motivation behind the idea of front propagation is to control the formation of vessel trees and to generate contextual information for the later feature extraction process, instead of a plain pixel classification using intensity and/or texture features. In the front propagation process, only pixels that are within a small neighborhood of the previously propagated vessel tree regions are evaluated for possible propagation. If a pixel satisfies the confidence level constraint, the pixel is propagated.
Let I(x,y) be the raw intensity image, G(x,y) be the gradient image, the confidence value C(x,y) at a pixel (x,y) is computed as: [equation image omitted] where K(x,y) represents the accumulated distance at (x,y) along the trace of front propagation, a<1 and b<1 are weighting parameters which are set to 0.4 and 0.2, respectively. Multiple levels of confidence are pre-established as control thresholds. At each level of control thresholds, the fronts of current vessel tree regions are propagated as long as there are a sufficient number of front pixels with their confidence value larger than the control threshold. FIG. 5 illustrates pseudo-code 500 for implementing the front propagation algorithm according to an embodiment of the present invention. As shown in the pseudo-code 500, four confidence levels can be used for propagation of the vessel tree region front in an advantageous implementation. [0030] Returning to FIG. 2, at step 208, features are extracted for the nodule candidates based on the background contextual information. In conventional nodule feature extraction algorithms described above, a nodule is either explicitly or implicitly modeled as an overlay of two separate intensity formations: (1) the round shaped nodule blob and (2) the lung background which may contain different structures. Except for a few large and/or dense nodules, genuine nodules typically exhibit as weak round shaped blobs added to the underlying background structures. Feature computation within a limited region of interest of a nodule, such as the peak support region used by the matching filtering based techniques and the exploration ring range used by the adaptive ring filtering based techniques, is not able to reveal the subtle difference between genuine nodules and false positive background structures in image properties.
As observed by the present inventors, it is actually the relationship between a nodule and the underlying background structures (e.g., vessel trees) that is effective in differentiating a false positive from a genuine nodule. Therefore, under background vessel tree context, the nodule feature extraction method focuses on assessing the relationship of a susceptible nodule with the labeled vessel tree structures, according to an embodiment of the present invention.)) Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAHMINA N ANSARI whose telephone number is (571)270-3379. The examiner can normally be reached on IFP Flex - Monday through Friday 9 to 5. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, O' NEAL MISTRY can be reached on 313-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. TAHMINA N. ANSARI Examiner, Art Unit 2672 February 18, 2026 /TAHMINA N ANSARI/Primary Examiner, Art Unit 2674
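The claim-15 pipeline discussed in the action above — identify peak elements in the background-suppressed field, then grow each anomalous object outward over connected elements that satisfy a membership criterion — can be sketched roughly as follows. The 4-connectivity, the peak criterion, and the membership threshold are illustrative placeholders, not the application's actual criteria.

```python
import numpy as np
from collections import deque

def grow_objects(field, peak_thresh=0.9, member_thresh=0.5):
    """Flood-fill a 4-connected region around each peak element (illustrative)."""
    visited = np.zeros(field.shape, dtype=bool)
    objects = []
    # Peak criterion (placeholder): elements at or above peak_thresh
    peaks = np.argwhere(field >= peak_thresh)
    for r0, c0 in peaks:
        if visited[r0, c0]:
            continue  # peak already absorbed into an earlier object
        region, queue = [], deque([(r0, c0)])
        visited[r0, c0] = True
        while queue:
            r, c = queue.popleft()
            region.append((r, c))
            # Grow over 4-connected neighbors meeting the membership criterion
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < field.shape[0] and 0 <= cc < field.shape[1]
                        and not visited[rr, cc] and field[rr, cc] >= member_thresh):
                    visited[rr, cc] = True
                    queue.append((rr, cc))
        objects.append(region)
    return objects
```

Each returned region would then feed the feature-extraction step (a feature vector per grown object) that claim 15 recites.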

Prosecution Timeline

Mar 25, 2022
Application Filed
Dec 04, 2024
Non-Final Rejection — §102, §103
Mar 04, 2025
Response Filed
May 09, 2025
Final Rejection — §102, §103
Aug 12, 2025
Response after Non-Final Action
Aug 22, 2025
Request for Continued Examination
Aug 25, 2025
Response after Non-Final Action
Aug 28, 2025
Non-Final Rejection — §102, §103
Dec 04, 2025
Response Filed
Feb 21, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586249
PROCESSING APPARATUS, PROCESSING METHOD, AND STORAGE MEDIUM FOR CALIBRATING AN IMAGE CAPTURE APPARATUS
2y 5m to grant Granted Mar 24, 2026
Patent 12586354
TRAINING METHOD, APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR A MACHINE LEARNING MODEL
2y 5m to grant Granted Mar 24, 2026
Patent 12573083
COMPUTER-READABLE RECORDING MEDIUM STORING OBJECT DETECTION PROGRAM, DEVICE, AND MACHINE LEARNING MODEL GENERATION METHOD OF TRAINING OBJECT DETECTION MODEL TO DETECT CATEGORY AND POSITION OF OBJECT
2y 5m to grant Granted Mar 10, 2026
Patent 12548297
IMAGE PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT BASED ON FEATURE AND DISTRIBUTION CORRELATION
2y 5m to grant Granted Feb 10, 2026
Patent 12524504
METHOD AND DATA PROCESSING SYSTEM FOR PROVIDING EXPLANATORY RADIOMICS-RELATED INFORMATION
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
86%
Grant Probability
99%
With Interview (+17.9%)
2y 8m
Median Time to Grant
High
PTA Risk
Based on 868 resolved cases by this examiner. Grant probability derived from career allow rate.
