Prosecution Insights
Last updated: April 19, 2026
Application No. 18/293,704

SECURITY CHECK CT OBJECT RECOGNITION METHOD AND APPARATUS

Status: Non-Final OA (§103, §Other)
Filed: Jan 30, 2024
Examiner: ALLEN, LUCIUS CAMERON GREE
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: Tsinghua University
OA Round: 1 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 71% (27 granted / 38 resolved; +9.1% vs TC avg; above average)
Interview Lift: +39.3% across resolved cases with interview (strong)
Avg Prosecution: 3y 0m (typical timeline)
Total Applications: 58 across all art units (20 currently pending)
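The headline figures above are internally consistent and can be reproduced from the stated raw counts. A minimal sanity check follows; note that the Tech Center average allow rate is backed out from the +9.1% delta, so it is inferred rather than reported:

```python
# Sanity-check the examiner metrics from the stated raw counts.
granted, resolved = 27, 38        # "27 granted / 38 resolved"
total_applications = 58           # "Total Applications"

allow_rate = granted / resolved              # career allow rate
pending = total_applications - resolved      # applications still open

print(f"Career allow rate: {allow_rate:.0%}")   # 71%
print(f"Currently pending: {pending}")          # 20

# The "+9.1% vs TC avg" delta implies a Tech Center average of roughly
# 71.1% - 9.1% = 62.0% (inferred here, not directly reported above).
tc_avg_estimate = allow_rate - 0.091
print(f"Implied TC average: {tc_avg_estimate:.1%}")
```

The 20-pending figure is exactly the gap between the 58 total applications and the 38 resolved cases, which confirms the counts were drawn from the same career dataset.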

Statute-Specific Performance

§101: 12.6% (-27.4% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 23.7% (-16.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 38 resolved cases.

Office Action

Rejections: §103, §Other
DETAILED ACTION

Notice of AIA Status

The present application is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 01/30/2024, 03/03/2025, 08/06/2025, and 02/11/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 14, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Naidu (US 20140010437 A1), hereafter Naidu, in view of Mohammadi et al. ("The Influence of Spatial Registration on Detection of Cerebral Asymmetries Using Voxel-Based Statistics of Fractional Anisotropy Images and TBSS"), hereafter Mohammadi, and Chen et al. (US 20210049397 A1), hereafter Chen.

Regarding claim 1, Naidu teaches a method of identifying at least one target object for a security inspection computed tomography (CT), comprising (Fig. 1, Paragraph [0023]: Naidu discloses systems and/or techniques for separating a compound object representation into sub-objects in image data generated by subjecting one or more objects to imaging using an imaging apparatus, e.g., a CT image of a piece of luggage under inspection at an airport security station):

performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views (Fig. 3, Paragraph [0048]: the Eigen projector 302 converts the three-dimensional image data 156 indicative of the potential compound object into one or more two-dimensional Eigen projections 350 and records a correspondence 351 between the three-dimensional image data and the two-dimensional Eigen projection 350);

performing a target identification on a plurality of two-dimensional views to obtain a set of two-dimensional semantic descriptions of the at least one target object, wherein the plurality of two-dimensional views comprise the plurality of two-dimensional dimension-reduced views (Fig. 3, Paragraph [0052]: the projection eroder 304 may repeat a similar adaptive erosion technique on a plurality of pixels to identify spaces, or divides, in the compound object; in this way, one or more portions of the compound object may be divided to reveal one or more sub-objects, e.g., each "group" of pixels corresponding to a sub-object); and

performing a dimension increase on the set of two-dimensional semantic descriptions to obtain a three-dimensional recognition result of the at least one target object (Fig. 3, Paragraph [0055]: the compound object splitter 126 further comprises a back-projector 310 configured to receive the pruned and segmented Eigen projection 356 and to back-project it into three-dimensional image data indicative of the sub-objects 160), wherein the performing a dimension increase comprises: mapping the set of two-dimensional semantic descriptions to a three-dimensional space by using a back-projection method (Fig. 3, Paragraph [0055], cited above).

Naidu fails to explicitly teach doing so to obtain a three-dimensional probability map. However, Mohammadi explicitly teaches obtaining a three-dimensional probability map (Fig. 2d, Page 5, Paragraph 2: after back-projection, the group average of the resulting masks was calculated, revealing a three-dimensional probability map (Fig. 2d, middle)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Naidu's method of identifying at least one target object for a security inspection CT with Mohammadi's teaching so as to obtain a three-dimensional probability map. The motivation for the modification would have been to allow more information to be obtained by creating a probability map, since both Naidu and Mohammadi use CT to generate images: Naidu's system improves the accuracy of threat-item detection, while Mohammadi's system improves the creation of a probability map. See Naidu, Paragraphs [0005] and [0057], and Mohammadi, Page 5, Paragraph 2.

Naidu in view of Mohammadi fails to explicitly teach performing a feature extraction on the three-dimensional probability map to obtain the three-dimensional recognition result of the at least one target object. However, Chen explicitly teaches this limitation (Fig. 5, Paragraph [0102]: the terminal performs three-dimensional fusion convolution on the three-dimensional distribution feature map to obtain a three-dimensional segmentation probability map; further, Fig. 5, Paragraph [0103]: the three-dimensional segmentation probability map 1005 indicates a probability that each pixel in the three-dimensional medical image belongs to a foreground region and/or a background region, where the foreground region is the region in which the target organ is located and the background region is the region without the target organ).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Naidu in view of Mohammadi with Chen's teaching of performing a feature extraction on the three-dimensional probability map to obtain the three-dimensional recognition result of the at least one target object. The motivation for the modification would have been to allow more accurate detection, since both Naidu and Chen use CT to generate images: Naidu's system improves the accuracy of threat-item detection, while Chen's system improves the accuracy of detection information. See Naidu, Paragraphs [0005] and [0057], and Chen, Paragraphs [0040]-[0041].
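For orientation, the claimed pipeline that this rejection maps onto Naidu (2D projection), Mohammadi (back-projected probability map), and Chen (3D feature extraction) can be sketched end to end. This is an illustrative reconstruction under stated assumptions, not code from any cited reference: the axis-aligned max projections, the thresholding stand-in for the 2D detector, and the 0.9 extraction cutoff are all invented for the sketch.

```python
import numpy as np

def reduce_dimensions(volume):
    """Dimension reduction: project the 3D CT volume along each axis
    to produce three 2D dimension-reduced views (cf. Naidu's Eigen
    projections; axis-aligned max projections are an assumption)."""
    return [volume.max(axis=a) for a in range(3)]

def detect_2d(view, threshold=0.5):
    """Placeholder 2D target identification: a per-pixel 'semantic
    description' (here, a hard detection mask) for each view."""
    return (view > threshold).astype(float)

def back_project(masks, shape):
    """Dimension increase: back-project the 2D semantic descriptions
    into 3D and average them into a probability map (cf. Mohammadi's
    group-averaged back-projected masks)."""
    acc = np.zeros(shape)
    for axis, mask in enumerate(masks):
        # Broadcasting along the axis that was lost in projection.
        acc += np.expand_dims(mask, axis=axis)
    return acc / len(masks)

def extract_3d(prob_map, threshold=0.9):
    """Feature extraction on the 3D probability map to obtain the
    final recognition result (a stand-in for Chen's 3D step)."""
    return prob_map >= threshold

# Toy volume with one bright "target object".
vol = np.zeros((8, 8, 8))
vol[2:5, 2:5, 2:5] = 1.0

views = reduce_dimensions(vol)          # three 2D dimension-reduced views
masks = [detect_2d(v) for v in views]   # 2D semantic descriptions
prob = back_project(masks, vol.shape)   # 3D probability map
result = extract_3d(prob)               # 3D recognition result
print(result.sum())                     # voxels recognized as the target
```

On the toy volume, only voxels where all three back-projected masks agree reach probability 1.0, so the extraction step recovers exactly the 3x3x3 target cube (27 voxels).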
Regarding claim 3, Naidu in view of Mohammadi and Chen teaches the method of identifying the at least one target object for the security inspection CT according to claim 1, Naidu further teaches wherein the mapping the set of two-dimensional semantic descriptions to a three-dimensional space by using a back-projection method so as to obtain a three-dimensional probability map comprises: mapping the set of two-dimensional semantic descriptions to the three-dimensional space by voxel driving or pixel driving so as to obtain a semantic feature matrix (Fig. 3, Paragraph [0055]- Naidu discloses the back-projector 310 is configured to reverse map the data from two-dimensional Eigen space into three-dimensional image space utilizing the correspondence 351 between the three-dimensional image data and the two-dimensional Eigen projection 356), Naidu in view of Mohammadi fails to explicitly teach and compressing the semantic feature matrix into the three-dimensional probability map. However, Chen explicitly teaches and compressing the semantic feature matrix into the three-dimensional probability map (Fig. 5, Paragraph [0102]- Chen discloses the terminal performs three-dimensional fusion convolution on the three-dimensional distribution feature map, to obtain a three-dimensional segmentation probability map.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Naidu in view of Mohammadi and Chen of a method of identifying at least one target object for a security inspection computed tomography (CT), comprising: performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views with the teachings of Chen compressing the semantic feature matrix into the three-dimensional probability map. 
Wherein having Naidu’s system for compound object separation wherein compressing the semantic feature matrix into the three-dimensional probability map. The motivation behind the modification would have been to allow for more accurate detection, since both Naidu and Chen are systems that use CT to generate images. Wherein Naidu’s system wherein improved accuracy of threat item detection, while Chen’s system wherein improved accuracy of detection information. Please see Naidu et al. (US 20140010437 A1), Paragraph [0005 and 0057] and Chen et al. (US 20210049397 A1) Paragraph [0040-41]. Regarding claim 14, Naidu in view of Mohammadi and Chen teaches the method of identifying the at least one target object for the security inspection CT according to claim 1, Naidu in view of Mohammadi fails to explicitly teach wherein the performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views comprises: setting a plurality of directions for the three-dimensional CT data However, Chen explicitly teaches wherein the performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views comprises: setting a plurality of directions for the three-dimensional CT data (Fig. 3, Paragraph [0055]- Chen discloses the terminal performs slicing on the three-dimensional image according to three directional planes in which three-dimensional coordinate axes are located, to obtain two-dimensional slice images of an x axis, two-dimensional slice images of a y axis, and two-dimensional slice images of a z axis.); and projecting or rendering according to the plurality of directions (Fig. 
4, Paragraph [0067]- Chen discloses in the method provided in some embodiments, slicing is performed on an obtained three-dimensional image according to the three directional planes in which three-dimensional coordinate axes are located, to obtain two-dimensional slice images corresponding to three directional planes, and then two-dimensional distribution probability maps corresponding to the three directional planes are obtained by using three segmentation models corresponding to the three directional planes, so that a terminal implements two-dimensional semantic segmentation on a three-dimensional medical image.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Naidu in view of Mohammadi and Chen of a method of identifying at least one target object for a security inspection computed tomography (CT), comprising: performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views with the teachings of Chen wherein the performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views comprises: setting a plurality of directions for the three-dimensional CT data Wherein having Naidu’s system for compound object separation wherein the performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views comprises: setting a plurality of directions for the three-dimensional CT data The motivation behind the modification would have been to allow for more accurate detection, since both Naidu and Chen are systems that use CT to generate images. Wherein Naidu’s system wherein improved accuracy of threat item detection, while Chen’s system wherein improved accuracy of detection information. Please see Naidu et al. 
(US 20140010437 A1), Paragraph [0005 and 0057] and Chen et al. (US 20210049397 A1) Paragraph [0040-41]. Regarding claim 18, Naidu teaches an apparatus of identifying at least one target object for a security inspection computed tomography (CT) (Fig. 1, Paragraph [0023]- Naidu discloses one or more systems and/or techniques for separating a compound object representation into sub-objects in image data generated by subjecting one or more objects to imaging using an imaging apparatus (e.g., a computed tomography (CT) image of a piece of luggage under inspection at a security station at an airport) are provided herein.), the apparatus comprising a processor (Fig. 1, Paragraph [0008]- Naidu discloses a computer readable storage device comprising computer executable instructions that when executed via a microprocessor perform a method is provided), and a non-transitory machine-readable storage medium storing a program that when executed by the processor (Fig. 1, Paragraph [0008]- Naidu discloses a computer readable storage device comprising computer executable instructions that when executed via a microprocessor perform a method is provided), causes the processor to: perform a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views (Fig. 3, Paragraph [0048]- Naidu discloses the Eigen projector 302 is also configured to convert the three-dimensional image data 156 indicative of the potential compound object into one or more two-dimensional Eigen projections 350 indicative of the potential compound object and to record a correspondence 351 between the three-dimensional image data and the two-dimensional Eigen projection 350.); perform a target identification on a plurality of two-dimensional views to obtain a set of two-dimensional semantic descriptions of the at least one target object, wherein the plurality of two-dimensional views comprise the plurality of two-dimensional dimension-reduced views (Fig. 
3, Paragraph [0052]- Naidu discloses the projection eroder 304 may repeat a similar adaptive erosion technique on a plurality of pixels to identify spaces, or divides, in the compound object. In this way, one or more portions of the compound object may be divided to reveal one or more sub-objects (e.g., each "group" of pixels corresponding to a sub-object), and perform a dimension increase on the set of two-dimensional semantic descriptions to obtain a three-dimensional recognition result of the at least one target object (Fig. 3, Paragraph [0055]- Naidu discloses the compound object splitter 126 further comprises a back-projector 310 configured to receive the pruned and segmented Eigen projection 356 and to back-project the two-dimensional Eigen projection 356 into three-dimensional image data indicative of the sub-objects 160.), wherein to perform a dimension increase on the set of two-dimensional semantic descriptions to obtain a three-dimensional recognition result of the at least one target object, the program, when executed by the processor, causes the processor to: map the set of two-dimensional semantic descriptions to a three-dimensional space by using a back-projection method (Fig. 3, Paragraph [0055]- Naidu discloses the compound object splitter 126 further comprises a back-projector 310 configured to receive the pruned and segmented Eigen projection 356 and to back-project the two-dimensional Eigen projection 356 into three-dimensional image data indicative of the sub-objects 160.), Naidu fails to explicitly teach so as to obtain a three- dimensional probability map. However, Mohammadi explicitly teaches so as to obtain a three- dimensional probability map (Fig. 2d, Page 5 Paragraph [0002]- Mohammadi discloses after back-projection, the group average of the resulting masks was calculated revealing a three-dimensional probability map (Fig. 
2d, middle).); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Naidu of an apparatus of identifying at least one target object for a security inspection computed tomography (CT), the apparatus comprising a processor, and a non-transitory machine-readable storage medium storing a program that when executed by the processor, causes the processor to: perform a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views with the teachings of Mohammadi so as to obtain a three-dimensional probability map. Wherein having Naidu’s system for compound object separation wherein so as to obtain a three-dimensional probability map. The motivation behind the modification would have been to allow for more information to be obtained by creating a probability map, since both Naidu and Mohammadi are systems that use CT to generate images. Wherein Naidu’s system wherein improved accuracy of threat item detection, while Mohammadi’s system wherein improved the creation of a probability map. Please see Naidu et al. (US 20140010437 A1), Paragraph [0005 and 0057] and Mohammadi et al. (The Influence of Spatial Registration on Detection of Cerebral Asymmetries Using Voxel-Based Statistics of Fractional Anisotropy Images and TBSS) Page 5 Paragraph [0002]. Naidu in view of Mohammadi fails to explicitly teach and perform a feature extraction on the three-dimensional probability map to obtain the three-dimensional recognition result of the at least one target object. However, Chen explicitly teaches and perform a feature extraction on the three-dimensional probability map to obtain the three-dimensional recognition result of the at least one target object (Fig. 
5, Paragraph [0102]- Chen discloses the terminal performs three-dimensional fusion convolution on the three-dimensional distribution feature map, to obtain a three-dimensional segmentation probability map. Further in Fig. 5, Paragraph [0103]- Chen discloses the three-dimensional segmentation probability map 1005 is used for indicating a probability that each pixel in the three-dimensional medical image belongs to a foreground region and/or a probability that each pixel in the three-dimensional medical image belongs to a background region. The foreground region is a region in which the target organ is located, and the background region is a region without the target organ). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Naidu in view of Mohammadi of an apparatus of identifying at least one target object for a security inspection computed tomography (CT), the apparatus comprising a processor, and a non-transitory machine-readable storage medium storing a program that when executed by the processor, causes the processor to: perform a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views with the teachings of Chen perform a feature extraction on the three-dimensional probability map to obtain the three-dimensional recognition result of the at least one target object. Wherein having Naidu’s system for compound object separation wherein perform a feature extraction on the three-dimensional probability map to obtain the three-dimensional recognition result of the at least one target object. The motivation behind the modification would have been to allow for more accurate detection, since both Naidu and Chen are systems that use CT to generate images. 
Wherein Naidu’s system wherein improved accuracy of threat item detection, while Chen’s system wherein improved accuracy of detection information. Please see Naidu et al. (US 20140010437 A1), Paragraph [0005 and 0057] and Chen et al. (US 20210049397 A1) Paragraph [0040-41]. Regarding claim 19, Naidu teaches a non-transitory machine-readable storage medium having a program thereon (Fig. 1, Paragraph [0008]- Naidu discloses a computer readable storage device comprising computer executable instructions that when executed via a microprocessor perform a method is provided), wherein the program, when executed by a processor, causes a computer to: perform a dimension reduction on three-dimensional computed tomography (CT) data to generate a plurality of two-dimensional dimension-reduced views (Fig. 3, Paragraph [0048]- Naidu discloses the Eigen projector 302 is also configured to convert the three-dimensional image data 156 indicative of the potential compound object into one or more two-dimensional Eigen projections 350 indicative of the potential compound object and to record a correspondence 351 between the three-dimensional image data and the two-dimensional Eigen projection 350.); perform a target identification on a plurality of two-dimensional views to obtain a set of two-dimensional semantic descriptions of the at least one target object, wherein the plurality of two-dimensional views comprise the plurality of two-dimensional dimension- reduced views (Fig. 3, Paragraph [0052]- Naidu discloses the projection eroder 304 may repeat a similar adaptive erosion technique on a plurality of pixels to identify spaces, or divides, in the compound object. 
In this way, one or more portions of the compound object may be divided to reveal one or more sub-objects (e.g., each "group" of pixels corresponding to a sub-object).); and perform a dimension increase on the set of two-dimensional semantic descriptions to obtain a three-dimensional recognition result of the at least one target object (Fig. 3, Paragraph [0055]- Naidu discloses the compound object splitter 126 further comprises a back-projector 310 configured to receive the pruned and segmented Eigen projection 356 and to back-project the two-dimensional Eigen projection 356 into three-dimensional image data indicative of the sub-objects 160.), wherein the program, when executed by the processor, causes the computer to: map the set of two-dimensional semantic descriptions to a three-dimensional space by using a back-projection method (Fig. 3, Paragraph [0055]- Naidu discloses the compound object splitter 126 further comprises a back-projector 310 configured to receive the pruned and segmented Eigen projection 356 and to back-project the two-dimensional Eigen projection 356 into three-dimensional image data indicative of the sub-objects 160.), Naidu fails to explicitly teach so as to obtain a three- dimensional probability map. However, Mohammadi explicitly teaches so as to obtain a three- dimensional probability map (Fig. 2d, Page 5 Paragraph [0002]- Mohammadi discloses after back-projection, the group average of the resulting masks was calculated revealing a three-dimensional probability map (Fig. 
2d, middle).); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Naidu of a non-transitory machine-readable storage medium having a program thereon, wherein the program, when executed by a processor, causes a computer to: perform a dimension reduction on three-dimensional computed tomography (CT) data to generate a plurality of two-dimensional dimension-reduced views; perform a target identification on a plurality of two-dimensional views to obtain a set of two-dimensional semantic descriptions of the at least one target object, wherein the plurality of two-dimensional views comprise the plurality of two-dimensional dimension- reduced views with the teachings of Mohammadi so as to obtain a three-dimensional probability map. Wherein having Naidu’s system for compound object separation wherein so as to obtain a three-dimensional probability map. The motivation behind the modification would have been to allow for more information to be obtained by creating a probability map, since both Naidu and Mohammadi are systems that use CT to generate images. Wherein Naidu’s system wherein improved accuracy of threat item detection, while Mohammadi’s system wherein improved the creation of a probability map. Please see Naidu et al. (US 20140010437 A1), Paragraph [0005 and 0057] and Mohammadi et al. (The Influence of Spatial Registration on Detection of Cerebral Asymmetries Using Voxel-Based Statistics of Fractional Anisotropy Images and TBSS) Page 5 Paragraph [0002]. Naidu in view of Mohammadi fails to explicitly teach and perform a feature extraction on the three-dimensional probability map to obtain the three-dimensional recognition result of the at least one target object. 
However, Chen explicitly teaches and perform a feature extraction on the three-dimensional probability map to obtain the three-dimensional recognition result of the at least one target object (Fig. 5, Paragraph [0102]- Chen discloses the terminal performs three-dimensional fusion convolution on the three-dimensional distribution feature map, to obtain a three-dimensional segmentation probability map. Further in Fig. 5, Paragraph [0103]- Chen discloses the three-dimensional segmentation probability map 1005 is used for indicating a probability that each pixel in the three-dimensional medical image belongs to a foreground region and/or a probability that each pixel in the three-dimensional medical image belongs to a background region. The foreground region is a region in which the target organ is located, and the background region is a region without the target organ). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Naidu in view of Mohammadi of a non-transitory machine-readable storage medium having a program thereon, wherein the program, when executed by a processor, causes a computer to: perform a dimension reduction on three-dimensional computed tomography (CT) data to generate a plurality of two-dimensional dimension-reduced views; perform a target identification on a plurality of two-dimensional views to obtain a set of two-dimensional semantic descriptions of the at least one target object, wherein the plurality of two-dimensional views comprise the plurality of two-dimensional dimension-reduced views with the teachings of Chen perform a feature extraction on the three-dimensional probability map to obtain the three-dimensional recognition result of the at least one target object. 
Wherein having Naidu’s system for compound object separation wherein perform a feature extraction on the three-dimensional probability map to obtain the three-dimensional recognition result of the at least one target object. The motivation behind the modification would have been to allow for more accurate detection, since both Naidu and Chen are systems that use CT to generate images. Wherein Naidu’s system wherein improved accuracy of threat item detection, while Chen’s system wherein improved accuracy of detection information. Please see Naidu et al. (US 20140010437 A1), Paragraph [0005 and 0057] and Chen et al. (US 20210049397 A1) Paragraph [0040-41]. Claims 6-8, and 13 are rejected under 35 U.S.C 103 as being unpatentable over Naidu (US 20140010437 A1) hereafter referenced as Naidu in view of Mohammadi et al. (The Influence of Spatial Registration on Detection of Cerebral Asymmetries Using Voxel-Based Statistics of Fractional Anisotropy Images and TBSS) hereafter referenced as Mohammadi, Chen et al. (US 20210049397 A1) hereafter referenced as Chen, and Lay et al. (US 20160328855 A1) hereafter referenced as Lay. Regarding claim 6, Naidu in view of Mohammadi and Chen teaches the method of identifying the at least one target object for the security inspection CT according to claim 1, Naidu in view of Mohammadi fails to explicitly teach or a deep learning method, so as to obtain a set of three-dimensional image semantic descriptions as the three-dimensional recognition result. However, Chen explicitly teaches or a deep learning method, so as to obtain a set of three-dimensional image semantic descriptions as the three-dimensional recognition result (Fig. 1, Paragraph [0003]- Chen discloses a shape or volume change of human organs or tissues has an important implication for clinical diagnosis. 
Image regions in which the human organs or tissues are located in the medical image can be obtained by performing semantic segmentation on the medical image by using a deep learning model.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Naidu in view of Mohammadi and Chen of a method of identifying at least one target object for a security inspection computed tomography (CT), comprising: performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views with the teachings of Chen a deep learning method, so as to obtain a set of three-dimensional image semantic descriptions as the three-dimensional recognition result. Wherein having Naidu’s system for compound object separation wherein a deep learning method, so as to obtain a set of three-dimensional image semantic descriptions as the three-dimensional recognition result. The motivation behind the modification would have been to allow for more accurate detection, since both Naidu and Chen are systems that use CT to generate images. Wherein Naidu’s system wherein improved accuracy of threat item detection, while Chen’s system wherein improved accuracy of detection information. Please see Naidu et al. (US 20140010437 A1), Paragraph [0005 and 0057] and Chen et al. (US 20210049397 A1) Paragraph [0040-41]. Naidu in view of Mohammadi and Chen fails to explicitly teach wherein the performing a feature extraction on the three-dimensional probability map to obtain the three-dimensional recognition result of the at least one target object comprises: performing the feature extraction on the three-dimensional probability map by using at least one or a combination of an image processing method, a classic machine learning method. 
However, Lay explicitly teaches wherein the performing a feature extraction on the three-dimensional probability map to obtain the three-dimensional recognition result of the at least one target object comprises: performing the feature extraction on the three-dimensional probability map by using at least one or a combination of an image processing method (Fig. 1, Paragraph [0040]- Lay discloses intensity-based thresholding can be performed in the medical image data prior to applying the trained voxel classifier. This allows the trained voxel classifier to only consider sufficiently bright voxels whose intensities are above a certain intensity threshold. (wherein thresholding is a classical image processing method)), a classic machine learning method (Fig. 2, Paragraph [0033]- Lay discloses the voxel classifier is a machine learning based classifier trained based on image-based features and landmark-based features extracted from annotated training images. The trained voxel classifier can be a Random Forest classifier or a probabilistic boosting tree (PBT) classifier with boosted decision tree, but the present invention is not limited thereto.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi and Chen (a method of identifying at least one target object for a security inspection computed tomography (CT), comprising performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views) with the teachings of Lay (wherein the performing a feature extraction on the three-dimensional probability map to obtain the three-dimensional recognition result of the at least one target object comprises: performing the feature extraction on the three-dimensional probability map by using at least one or a combination of an image processing method, a classic machine learning method). In the combination, Naidu’s system for compound object separation would perform the feature extraction on the three-dimensional probability map by using at least one or a combination of an image processing method and a classic machine learning method. The motivation for the modification would have been to allow for better visualization and accuracy, since both Naidu and Lay are systems that use CT to generate images: Naidu’s system improves the accuracy of threat item detection, while Lay’s system improves accuracy and visualization. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005] and [0057], and Lay et al. (US 20160328855 A1), Paragraphs [0003] and [0055]. Regarding claim 7, Naidu in view of Mohammadi, Chen, and Lay teaches the method of identifying the at least one target object for the security inspection CT according to claim 6. Naidu in view of Mohammadi fails to explicitly teach wherein a binarization is performed on the three-dimensional probability map to obtain a three-dimensional binary map. However, Chen explicitly teaches wherein a binarization is performed on the three-dimensional probability map to obtain a three-dimensional binary map (Fig. 10, Paragraph [0108]- Chen discloses if a probability that a pixel belongs to the foreground pixel is 80%, and a probability that the pixel belongs to the background pixel is 20%, a maximum probability category of the pixel is the foreground pixel.
In some embodiments, in the three-dimensional distribution binary image, the foreground pixel is represented by 1, and the background pixel is represented by 0.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi, Chen, and Lay (a method of identifying at least one target object for a security inspection computed tomography (CT), comprising performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views) with the teachings of Chen (wherein a binarization is performed on the three-dimensional probability map to obtain a three-dimensional binary map). In the combination, Naidu’s system for compound object separation would perform a binarization on the three-dimensional probability map to obtain a three-dimensional binary map. The motivation for the modification would have been to allow for more accurate detection, since both Naidu and Chen are systems that use CT to generate images: Naidu’s system improves the accuracy of threat item detection, while Chen’s system improves the accuracy of detection information. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005] and [0057], and Chen et al. (US 20210049397 A1), Paragraphs [0040]-[0041]. Naidu in view of Mohammadi and Chen fails to explicitly teach that a connected component analysis is performed on the three-dimensional binary map to obtain at least one connected component; and that the set of three-dimensional image semantic descriptions is generated for the at least one connected component. However, Lay explicitly teaches a connected component analysis is performed on the three-dimensional binary map to obtain at least one connected component (Fig. 5, Paragraph [0045]- Lay discloses this results in a binary mask including cortical bones and contrasted structures.
The aorta and vertebrae tend to be loosely connected by only a few voxels. Next, a morphological erosion is performed to disconnect the aorta from the vertebrae, leaving the aorta as a single connected component.); and the set of three-dimensional image semantic descriptions is generated for the at least one connected component (Fig. 5, Paragraph [0045]- Lay discloses then, each remaining connected component (after morphological erosion) is classified as aorta or not aorta. This can be performed by evaluating the voxels in each connected component with a trained classifier. The aorta connected components are then dilated back to their original size.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi, Chen, and Lay (a method of identifying at least one target object for a security inspection computed tomography (CT), comprising performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views) with the teachings of Lay (a connected component analysis is performed on the three-dimensional binary map to obtain at least one connected component; and the set of three-dimensional image semantic descriptions is generated for the at least one connected component). In the combination, Naidu’s system for compound object separation would perform a connected component analysis on the three-dimensional binary map to obtain at least one connected component and would generate the set of three-dimensional image semantic descriptions for the at least one connected component. The motivation for the modification would have been to allow for better visualization and accuracy, since both Naidu and Lay are systems that use CT to generate images.
Naidu’s system improves the accuracy of threat item detection, while Lay’s system improves accuracy and visualization. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005] and [0057], and Lay et al. (US 20160328855 A1), Paragraphs [0003] and [0055]. Regarding claim 8, Naidu in view of Mohammadi, Chen, and Lay teaches the method of identifying the at least one target object for the security inspection CT according to claim 7. Naidu in view of Mohammadi and Chen fails to explicitly teach wherein performing the connected component analysis comprises: performing a connected component labeling on the three-dimensional binary map, and performing a mask operation on each labeled region to obtain the at least one connected component. However, Lay explicitly teaches wherein performing the connected component analysis comprises: performing a connected component labeling on the three-dimensional binary map (Fig. 5, Paragraph [0045]- Lay discloses then, each remaining connected component (after morphological erosion) is classified as aorta or not aorta. This can be performed by evaluating the voxels in each connected component with a trained classifier.), and performing a mask operation on each labeled region to obtain the at least one connected component (Fig. 1, Paragraph [0055]- Lay discloses the vessel mask is a binary mask that includes only those voxels labeled as vessels in the vessel segmentation. The bone mask is a binary mask including only those voxels labeled as bone in the bone segmentation. Subtracting the vessel mask from the bone mask has the effect of removing any voxels that were classified as both vessel and bone from the bone mask.).
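For orientation only, the claimed sequence discussed for claims 7-8 (binarize a probability map, label connected components, take a mask per labeled region) can be sketched with a minimal 6-connectivity flood-fill labeling in NumPy. This is an illustrative sketch, not the classifier-driven pipeline Lay actually describes; the array shapes and contents are made-up placeholders.

```python
from collections import deque
import numpy as np

def label_components(binary):
    """Label 6-connected components in a 3D binary map (0 = background)."""
    offsets = ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1))
    labels = np.zeros(binary.shape, dtype=np.int32)
    count = 0
    for start in zip(*np.nonzero(binary)):
        if labels[start]:
            continue  # voxel already assigned to a component
        count += 1
        labels[start] = count
        queue = deque([start])
        while queue:  # breadth-first flood fill of one component
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < binary.shape[i] for i in range(3)) \
                        and binary[n] and not labels[n]:
                    labels[n] = count
                    queue.append(n)
    return labels, count

# Binarization by maximum-probability category (foreground wins when p > 0.5),
# then labeling and a mask operation per labeled region. Toy data only.
prob_map = np.zeros((1, 3, 5))
prob_map[0, 0, 0:2] = 0.8   # one blob of high foreground probability
prob_map[0, 2, 3:5] = 0.9   # a second, disjoint blob
binary = (prob_map > 0.5).astype(np.uint8)

labels, n = label_components(binary)
masks = [(labels == k).astype(np.uint8) for k in range(1, n + 1)]
print(n)  # 2
```

Each entry of `masks` is the binary mask of one labeled region, which is the "mask operation on each labeled region" step in its simplest form.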
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi, Chen, and Lay (a method of identifying at least one target object for a security inspection computed tomography (CT), comprising performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views) with the teachings of Lay (wherein performing the connected component analysis comprises: performing a connected component labeling on the three-dimensional binary map, and performing a mask operation on each labeled region to obtain the at least one connected component). In the combination, Naidu’s system for compound object separation would perform a connected component labeling on the three-dimensional binary map and a mask operation on each labeled region to obtain the at least one connected component. The motivation for the modification would have been to allow for better visualization and accuracy, since both Naidu and Lay are systems that use CT to generate images: Naidu’s system improves the accuracy of threat item detection, while Lay’s system improves accuracy and visualization. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005] and [0057], and Lay et al. (US 20160328855 A1), Paragraphs [0003] and [0055]. Regarding claim 13, Naidu in view of Mohammadi and Chen teaches the method of identifying the at least one target object for the security inspection CT according to claim 1. Naidu further teaches wherein performing the target identification on each of the plurality of two-dimensional views comprises: performing the target identification for two-dimensional images by using at least one or a combination of an image processing method (Fig.
3, Paragraph [0051]- Naidu discloses the compound object splitter 126 further comprises a projection eroder 304 (e.g., also referred to herein as a projection erosion component) which is configured to receive the two-dimensional Eigen projection 350. (wherein erosion is a classical image processing method)). Naidu in view of Mohammadi fails to explicitly teach the limitation "or a deep learning method." However, Chen explicitly teaches a deep learning method (Fig. 1, Paragraph [0003]- Chen discloses a shape or volume change of human organs or tissues has an important implication for clinical diagnosis. Image regions in which the human organs or tissues are located in the medical image can be obtained by performing semantic segmentation on the medical image by using a deep learning model.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi and Chen (a method of identifying at least one target object for a security inspection computed tomography (CT), comprising performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views) with the teachings of Chen (a deep learning method). In the combination, Naidu’s system for compound object separation would use a deep learning method. The motivation for the modification would have been to allow for more accurate detection, since both Naidu and Chen are systems that use CT to generate images: Naidu’s system improves the accuracy of threat item detection, while Chen’s system improves the accuracy of detection information. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005] and [0057], and Chen et al. (US 20210049397 A1), Paragraphs [0040]-[0041]. Naidu in view of Mohammadi and Chen fails to explicitly teach a classic machine learning method. However, Lay explicitly teaches a classic machine learning method (Fig.
2, Paragraph [0033]- Lay discloses the voxel classifier is a machine learning based classifier trained based on image-based features and landmark-based features extracted from annotated training images. The trained voxel classifier can be a Random Forest classifier or a probabilistic boosting tree (PBT) classifier with boosted decision tree, but the present invention is not limited thereto. Further, in Fig. 5, Paragraph [0047]- for each horizontal slice (axial slice), intensity thresholding is first performed to produce a binary mask of bright structures in that slice. A 2D connected component analysis is then performed on the binary mask for the slice. In the 2D connected component analysis, small connected components that are sufficiently circular are labeled as vessels.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi, Chen, and Lay (a method of identifying at least one target object for a security inspection computed tomography (CT), comprising performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views) with the teachings of Lay (a classic machine learning method). In the combination, Naidu’s system for compound object separation would use a classic machine learning method. The motivation for the modification would have been to allow for better visualization and accuracy, since both Naidu and Lay are systems that use CT to generate images: Naidu’s system improves the accuracy of threat item detection, while Lay’s system improves accuracy and visualization. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005] and [0057], and Lay et al. (US 20160328855 A1), Paragraphs [0003] and [0055].
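As context for the "classical image processing method" characterization of Naidu's projection eroder cited for claim 13, morphological erosion is a standard neighborhood operation on a binary image. The sketch below is a minimal 4-neighbour erosion written for illustration under made-up data; it is not Naidu's implementation.

```python
import numpy as np

def erode(binary):
    """Minimal 4-neighbour binary erosion: a pixel stays 1 only if it and its
    up/down/left/right neighbours are all 1 (pixels beyond the border count as 0)."""
    padded = np.pad(binary, 1, mode="constant")
    return (padded[1:-1, 1:-1]
            & padded[:-2, 1:-1] & padded[2:, 1:-1]
            & padded[1:-1, :-2] & padded[1:-1, 2:])

# A solid 4x4 block inside a 6x6 image erodes to its 2x2 interior,
# which is how erosion strips the thin "bridge" pixels that loosely
# join compound objects in a projection.
img = np.zeros((6, 6), dtype=np.uint8)
img[1:5, 1:5] = 1
eroded = erode(img)
print(eroded.sum())  # 4
```

Because erosion removes a one-pixel rim everywhere, two objects connected by a single-pixel neck separate into distinct components after one pass, which is the effect the compound object splitter relies on.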
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Naidu (US 20140010437 A1), hereafter referenced as Naidu, in view of Mohammadi et al. (The Influence of Spatial Registration on Detection of Cerebral Asymmetries Using Voxel-Based Statistics of Fractional Anisotropy Images and TBSS), hereafter referenced as Mohammadi, Chen et al. (US 20210049397 A1), hereafter referenced as Chen, Lay et al. (US 20160328855 A1), hereafter referenced as Lay, and Shiroshima et al. (US 20230419605 A1), hereafter referenced as Shiroshima. Regarding claim 9, Naidu in view of Mohammadi, Chen, and Lay teaches the method of identifying the at least one target object for the security inspection CT according to claim 7. Naidu in view of Mohammadi and Chen fails to explicitly teach wherein the generating the set of three-dimensional image semantic descriptions for the at least one connected component comprises: extracting all probability values for each connected component. However, Lay explicitly teaches wherein the generating the set of three-dimensional image semantic descriptions for the at least one connected component comprises: extracting all probability values for each connected component (Fig. 2, Paragraph [0033]- Lay discloses the trained voxel classifier calculates a probability for each voxel that the voxel is a bone structure based on the landmark-based features and the image-based features extracted for that voxel, and labels each voxel in the CTA image as bone or non-bone, resulting in a segmented bone mask for the CTA image.
In an advantageous embodiment, intensity-based thresholding can be performed in the CTA image prior to applying the trained voxel classifier.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi, Chen, and Lay (a method of identifying at least one target object for a security inspection computed tomography (CT), comprising performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views) with the teachings of Lay (wherein the generating the set of three-dimensional image semantic descriptions for the at least one connected component comprises: extracting all probability values for each connected component). In the combination, Naidu’s system for compound object separation would extract all probability values for each connected component when generating the set of three-dimensional image semantic descriptions. The motivation for the modification would have been to allow for better visualization and accuracy, since both Naidu and Lay are systems that use CT to generate images: Naidu’s system improves the accuracy of threat item detection, while Lay’s system improves accuracy and visualization. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005] and [0057], and Lay et al. (US 20160328855 A1), Paragraphs [0003] and [0055]. Naidu in view of Mohammadi, Chen, and Lay fails to explicitly teach performing a principal component analysis to obtain an analysis set and statistically generating a three-dimensional image semantic description by using the analysis set. However, Shiroshima explicitly teaches performing a principal component analysis to obtain an analysis set (Fig.
5, Paragraph [0042]- Shiroshima discloses the plane 52 may be defined, for example, by performing principal component analysis.), and statistically generating a three-dimensional image semantic description by using the analysis set (Fig. 5, Paragraph [0042]- Shiroshima discloses the virtual image generation unit 12 performs principal component analysis on the three-dimensional position of nearest neighbor feature points in the area 51. At this time, the virtual image generation unit 12 may define the plane containing first and second principal components, obtained as a result of performing principal component analysis, as the plane 52.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi, Chen, and Lay (a method of identifying at least one target object for a security inspection computed tomography (CT), comprising performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views) with the teachings of Shiroshima (performing a principal component analysis to obtain an analysis set and statistically generating a three-dimensional image semantic description by using the analysis set). In the combination, Naidu’s system for compound object separation would perform a principal component analysis to obtain an analysis set and statistically generate a three-dimensional image semantic description by using the analysis set. The motivation for the modification would have been to allow for better accuracy of estimated 3D information, since both Naidu and Shiroshima are systems that create three-dimensional information: Naidu’s system improves the accuracy of threat item detection, while Shiroshima’s system improves the accuracy of estimated 3D information. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005] and [0057], and Shiroshima et al.
(US 20230419605 A1), Paragraph [0003]. Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Naidu (US 20140010437 A1), hereafter referenced as Naidu, in view of Mohammadi et al. (The Influence of Spatial Registration on Detection of Cerebral Asymmetries Using Voxel-Based Statistics of Fractional Anisotropy Images and TBSS), hereafter referenced as Mohammadi, Chen et al. (US 20210049397 A1), hereafter referenced as Chen, Lay et al. (US 20160328855 A1), hereafter referenced as Lay, and Cinnamon et al. (US 9996890 B1), hereafter referenced as Cinnamon. Regarding claim 10, Naidu in view of Mohammadi, Chen, and Lay teaches the method of identifying the at least one target object for the security inspection CT according to claim 6. Naidu further teaches wherein the set of three-dimensional image semantic descriptions comprises a category information (Fig. 1, Paragraph [0040]- Naidu discloses threat determiner 128 can be configured to receive image data for an object, which may comprise image data indicative of sub-objects 160 and/or image data 158 that was determined by the entry control 124 to merely be representative of a single item. The threat determiner 128 can also be configured to compare the image data to one or more pre-determined thresholds, corresponding to one or more potential threat objects.) and/or a confidence level (Fig. 1, Paragraph [0061]- Naidu discloses the probability that an object is a potential compound object is determined by calculating the average density and/or atomic number (e.g., if the examination apparatus is a multi-energy system) and a standard deviation. If the standard deviation is above a predefined threshold, the object may be considered a potential compound object and thus the acts herein described may be performed to split the potential compound object into one or more sub-objects.), in units of one or more of voxels (Fig.
3, Paragraph [0048]- Naidu discloses one or more voxels of the three-dimensional image data are recorded as being represented by, or associated with, a pixel of the two-dimensional Eigen projection 350 indicative of the potential compound object.), three-dimensional volumes of interest (Fig. 1, Paragraph [0033]- Naidu discloses volumetric data (e.g., which may be converted into three dimensional image space) of the object(s) 110 under examination may be acquired.), or three-dimensional CT images (Fig. 1, Paragraph [0034]- Naidu discloses an image extractor 120 is coupled to the data acquisition component 118, and is configured to receive the data 150 from the data acquisition component 118 and generate three-dimensional image data 152 (e.g., also referred to herein as a three-dimensional representation) indicative of and/or representative of the examined object(s) 110 using a suitable analytical, iterative, and/or other reconstruction technique (e.g., back-projection from projection space to image space, tomosynthesis reconstruction, etc.).); or the set of three-dimensional image semantic descriptions comprises at least one of a category information (Fig. 1, Paragraph [0040]- Naidu discloses threat determiner 128 can be configured to receive image data for an object, which may comprise image data indicative of sub-objects 160 and/or image data 158 that was determined by the entry control 124 to merely be representative of a single item. The threat determiner 128 can also be configured to compare the image data to one or more pre-determined thresholds, corresponding to one or more potential threat objects.), or a confidence level (Fig. 1, Paragraph [0040]- Naidu discloses threat determiner 128 can be configured to receive image data for an object, which may comprise image data indicative of sub-objects 160 and/or image data 158 that was determined by the entry control 124 to merely be representative of a single item. 
The threat determiner 128 can also be configured to compare the image data to one or more pre-determined thresholds, corresponding to one or more potential threat objects.), in units of three-dimensional volumes of interest (Fig. 1, Paragraph [0033]- Naidu discloses volumetric data (e.g., which may be converted into three dimensional image space) of the object(s) 110 under examination may be acquired.), and/or three-dimensional CT images (Fig. 1, Paragraph [0034]- Naidu discloses an image extractor 120 is coupled to the data acquisition component 118, and is configured to receive the data 150 from the data acquisition component 118 and generate three-dimensional image data 152 (e.g., also referred to herein as a three-dimensional representation) indicative of and/or representative of the examined object(s) 110 using a suitable analytical, iterative, and/or other reconstruction technique (e.g., back-projection from projection space to image space, tomosynthesis reconstruction, etc.).). Naidu in view of Mohammadi, Chen, and Lay fails to explicitly teach a position information of the at least one target object. However, Cinnamon explicitly teaches a position information of the at least one target object (Column 6, Lines 15-16, Fig. 1- Cinnamon discloses the regression layer outputs scores that indicate the position, width and height of a given bounding box). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi, Chen, and Lay (a method of identifying at least one target object for a security inspection computed tomography (CT), comprising performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views) with the teachings of Cinnamon (a position information of the at least one target object).
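For orientation, a position information expressed as an axis-aligned three-dimensional bounding box can be read directly off a component's voxel coordinates. The min/max sketch below is illustrative only; Cinnamon's cited passage describes a learned regression layer that outputs box scores, not this computation.

```python
import numpy as np

def bounding_box_3d(mask):
    """Axis-aligned 3D bounding box (inclusive min and max corners)
    of the nonzero voxels in a binary mask."""
    coords = np.argwhere(mask)          # (N, 3) array of voxel indices
    return coords.min(axis=0), coords.max(axis=0)

# Toy component occupying z in {1,2}, y in {0,1}, x in {2,3}.
mask = np.zeros((4, 4, 4), dtype=np.uint8)
mask[1:3, 0:2, 2:4] = 1
lo, hi = bounding_box_3d(mask)
print(lo.tolist(), hi.tolist())  # [1, 0, 2] [2, 1, 3]
```

A box of this form (two corner coordinates) is one common encoding of the "position information" a recognition result can carry alongside category and confidence.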
In the combination, Naidu’s system for compound object separation would provide a position information of the at least one target object. The motivation for the modification would have been to allow for better classification of items within a scanned object, since both Naidu and Cinnamon are systems that scan an object to classify the items within it: Naidu’s system improves the accuracy of threat item detection, while Cinnamon’s system improves the classification of items within a scanned object. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005] and [0057], and Cinnamon et al. (US 9996890 B1), Column 3, Lines 38-55. Regarding claim 11, Naidu in view of Mohammadi, Chen, Lay, and Cinnamon teaches the method of identifying the at least one target object for the security inspection CT according to claim 10. Naidu in view of Mohammadi, Chen, and Lay fails to explicitly teach wherein the position information comprises a three-dimensional bounding box. However, Cinnamon explicitly teaches wherein the position information comprises a three-dimensional bounding box (Column 6, Lines 15-16, Fig.
1, Cinnamon discloses the regression layer outputs scores that indicate the position, width and height of a given bounding box). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi, Chen, and Lay (a method of identifying at least one target object for a security inspection computed tomography (CT), comprising performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views) with the teachings of Cinnamon (wherein the position information comprises a three-dimensional bounding box). In the combination, the position information in Naidu’s system for compound object separation would comprise a three-dimensional bounding box. The motivation for the modification would have been to allow for better classification of items within a scanned object, since both Naidu and Cinnamon are systems that scan an object to classify the items within it: Naidu’s system improves the accuracy of threat item detection, while Cinnamon’s system improves the classification of items within a scanned object. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005] and [0057], and Cinnamon et al. (US 9996890 B1), Column 3, Lines 38-55. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Naidu (US 20140010437 A1), hereafter referenced as Naidu, in view of Mohammadi et al. (The Influence of Spatial Registration on Detection of Cerebral Asymmetries Using Voxel-Based Statistics of Fractional Anisotropy Images and TBSS), hereafter referenced as Mohammadi, Chen et al. (US 20210049397 A1), hereafter referenced as Chen, and Morton et al. (US 20100303287 A1), hereafter referenced as Morton.
Regarding claim 12, Naidu in view of Mohammadi and Chen teaches the method of identifying the at least one target object for the security inspection CT according to claim 1. Naidu in view of Mohammadi fails to explicitly teach wherein the set of two-dimensional semantic descriptions comprises a category information and/or a confidence level, in units of one or more of pixels, regions of interest, or two-dimensional images; or the set of two-dimensional semantic descriptions comprises at least one of a category information, a confidence level, in units of regions of interest and/or two-dimensional images. However, Chen explicitly teaches wherein the set of two-dimensional semantic descriptions comprises a category information (Fig. 12, Paragraph [0156]- Chen discloses the terminal obtains a two-dimensional distribution binary image of the target organ through calculation according to a maximum probability category of each pixel in the distribution probability map.) and/or a confidence level (Fig. 3, Paragraph [0064]- Chen discloses the terminal invokes an adaptive fusion model to perform three-dimensional fusion on the three distribution probability maps respectively corresponding to the x-axis directional plane, the y-axis directional plane, and the z-axis directional plane, to obtain a three-dimensional distribution binary image of the target object.), in units of one or more of pixels (Fig.
11, Paragraph [0134]- Chen discloses where p represents a probability that the pixel belongs to target pixels corresponding to a target organ, y represents a category (that is, y is 0 or 1), w_fg represents a weight of a foreground category, w_bg represents a weight of a background category, t_i represents a quantity of pixels in the foreground of an i-th sample image, n_i represents a quantity of pixels in the entire i-th sample image, N is a quantity of sample images of a batch size, and a weighted value is obtained by collecting statistics on a ratio of the foreground to the background in a sample image.), regions of interest (Fig. 8, Paragraph [0094]- Chen discloses the distribution probability map 804 indicates a probability that each pixel on the two-dimensional slice image belongs to a foreground region and/or a probability that each pixel on the two-dimensional slice image belongs to a background region. The foreground region is a region in which the target organ is located, and the background region is a region without the target organ.), or two-dimensional images (Fig. 5, Paragraph [0082]- Chen discloses the first segmentation model completes a process of performing semantic segmentation on the two-dimensional slice images of the x axis according to features such as a distribution location, a size, and a shape of the target organ in the three-dimensional medical image, thereby outputting a distribution probability map of the target organ on an x-axis directional plane.); or the set of two-dimensional semantic descriptions comprises at least one of a category information (Fig. 12, Paragraph [0156]- Chen discloses the terminal obtains a two-dimensional distribution binary image of the target organ through calculation according to a maximum probability category of each pixel in the distribution probability map.), a confidence level (Fig.
3, Paragraph [0064]- Chen discloses the terminal invokes an adaptive fusion model to perform three-dimensional fusion on the three distribution probability maps respectively corresponding to the x-axis directional plane, the y-axis directional plane, and the z-axis directional plane, to obtain a three-dimensional distribution binary image of the target object.), in units of regions of interest (Fig. 8, Paragraph [0094]- Chen discloses the distribution probability map 804 indicates a probability that each pixel on the two-dimensional slice image belongs to a foreground region and/or a probability that each pixel on the two-dimensional slice image belongs to a background region. The foreground region is a region in which the target organ is located, and the background region is a region without the target organ.) and/or two-dimensional images (Fig. 5, Paragraph [0082]- Chen discloses the first segmentation model completes a process of performing semantic segmentation on the two-dimensional slice images of the x axis according to features such as a distribution location, a size, and a shape of the target organ in the three-dimensional medical image, thereby outputting a distribution probability map of the target organ on an x-axis directional plane.). 
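Chen's per-pixel distribution probability maps map directly onto the claimed "category information and/or a confidence level, in units of pixels": the category of each pixel is the class of maximum probability, the confidence is that maximum probability, and a binary distribution image of the target follows from the per-pixel categories. The sketch below is illustrative only; the function name and toy array are not from Chen or the application.

```python
import numpy as np

def pixel_semantics(prob_map):
    """Per-pixel category and confidence from a class-probability map.

    prob_map: (H, W, C) array of per-pixel class probabilities.
    Returns (category, confidence), each of shape (H, W).
    """
    category = np.argmax(prob_map, axis=-1)   # most probable class per pixel
    confidence = np.max(prob_map, axis=-1)    # probability of that class
    return category, confidence

# Toy 2x2 map over two classes (0 = background, 1 = target)
probs = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.6, 0.4], [0.3, 0.7]]])
cat, conf = pixel_semantics(probs)
# Binary "distribution image" of the target class, analogous to Chen's
# maximum-probability-category step in Paragraph [0156]
binary = (cat == 1).astype(np.uint8)
```

The same argmax/max pair generalizes from pixels to regions of interest or whole 2D images by pooling probabilities over the unit in question before taking the maximum.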
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi and Chen of a method of identifying at least one target object for a security inspection computed tomography (CT), comprising: performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views, with the teachings of Chen wherein the set of two-dimensional semantic descriptions comprises a category information and/or a confidence level, in units of one or more of pixels, regions of interest, or two-dimensional images, or the set of two-dimensional semantic descriptions comprises at least one of a category information, a confidence level, in units of regions of interest and/or two-dimensional images. The modification would amount to having Naidu's system for compound object separation include such a set of two-dimensional semantic descriptions. The motivation behind the modification would have been to allow for more accurate detection, since both Naidu and Chen are systems that use CT to generate images: Naidu's system provides improved accuracy of threat item detection, while Chen's system provides improved accuracy of detection information. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005 and 0057] and Chen et al. (US 20210049397 A1), Paragraphs [0040-41]. Naidu in view of Mohammadi and Chen is silent as to explicitly teaching "or a position information of the at least one target object." However, Morton explicitly teaches the position information of the at least one target object (Fig.
1, Paragraph [0025]- Morton discloses each parameter extractor being arranged to perform a different processing operation to determine a different parameter; one or more decision trees for constructing high level parameters by analyzing the identified low level parameters of the X-ray image; and a database searcher for mapping the X-ray image of the object as one of `threat-causing` or `clear` by using the constructed high level parameters of the X-ray image and predefined data stored in a database coupled with the database searcher. The parameter extractors are designed to operate on one of 2-dimensional images, 3-dimensional images and sinogram image data.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi and Chen of a method of identifying at least one target object for a security inspection computed tomography (CT), comprising: performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views, with the teachings of Morton wherein a position information of the at least one target object is provided. The motivation behind the modification would have been to allow for more accurate CT images to be created, since both Naidu and Morton are systems that use CT for the screening of baggage: Naidu's system provides improved accuracy of threat item detection, while Morton's system provides improved accuracy of CT images. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005 and 0057] and Morton et al. (US 20100303287 A1), Paragraph [0123]. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Naidu (US 20140010437 A1), hereafter referenced as Naidu, in view of Mohammadi et al.
(The Influence of Spatial Registration on Detection of Cerebral Asymmetries Using Voxel-Based Statistics of Fractional Anisotropy Images and TBSS), hereafter referenced as Mohammadi, Chen et al. (US 20210049397 A1), hereafter referenced as Chen, and Kaufman et al. (US 20070206008 A1), hereafter referenced as Kaufman. Regarding claim 15, Naidu in view of Mohammadi and Chen teaches the method of identifying the at least one target object for the security inspection CT according to claim 14. Naidu in view of Mohammadi and Chen fails to explicitly teach wherein the plurality of directions are arbitrary directions and are not limited to a direction orthogonal to a traveling direction of an object during a detection process. However, Kaufman explicitly teaches wherein the plurality of directions are arbitrary directions and are not limited to a direction orthogonal to a traveling direction of an object during a detection process (Fig. 14, Paragraph [0195]- Kaufman discloses that the rotation preferably involves a decomposition of the 3D rotation into a sequence of 2D slice shears. In a 2D slice shear, a volume slice (i.e., a plane of voxels along a major projection axis and parallel to any two axes) is merely shifted within its plane. A slice may be arbitrarily taken along any major projection axis. For example, FIG. 14 illustrates a y-slice shear. A 2D y-slice shear is preferably expressed as: x′ = x + a·y, z′ = z + b·y.).
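Kaufman's 2D y-slice shear is a simple coordinate update: within each y-slice, x and z are shifted by amounts proportional to y while y itself is unchanged. A minimal sketch of that update, applied to voxel coordinates (the function name and sample point are illustrative, not from Kaufman):

```python
import numpy as np

def y_slice_shear(points, a, b):
    """Apply a Kaufman-style 2D y-slice shear to an (N, 3) array of
    (x, y, z) coordinates: x' = x + a*y, z' = z + b*y, y unchanged."""
    pts = np.asarray(points, dtype=float).copy()
    pts[:, 0] += a * pts[:, 1]  # shift x within the slice
    pts[:, 2] += b * pts[:, 1]  # shift z within the slice
    return pts

p = y_slice_shear([[1.0, 2.0, 3.0]], a=0.5, b=-1.0)
# → [[2.0, 2.0, 1.0]]
```

Because every point in a given y-slice receives the same (a·y, b·y) shift, the shear can be implemented as a per-slice 2D translation, which is what makes the decomposition of a 3D rotation into slice shears attractive.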
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi and Chen of a method of identifying at least one target object for a security inspection computed tomography (CT), comprising: performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views, with the teachings of Kaufman wherein the plurality of directions are arbitrary directions and are not limited to a direction orthogonal to a traveling direction of an object during a detection process. The motivation behind the modification would have been to allow for a better quality image, since both Naidu and Kaufman are systems that slice 3D data into 2D data: Naidu's system provides improved accuracy of threat item detection, while Kaufman's system provides improved performance, quality, flexibility, and simplicity. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005 and 0057] and Kaufman et al. (US 20070206008 A1), Paragraph [0018]. Claims 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Naidu (US 20140010437 A1), hereafter referenced as Naidu, in view of Mohammadi et al. (The Influence of Spatial Registration on Detection of Cerebral Asymmetries Using Voxel-Based Statistics of Fractional Anisotropy Images and TBSS), hereafter referenced as Mohammadi, Chen et al. (US 20210049397 A1), hereafter referenced as Chen, and Zhang et al. (US 20150332498 A1), hereafter referenced as Zhang.
Regarding claim 16, Naidu in view of Mohammadi and Chen teaches the method of identifying the at least one target object for the security inspection CT according to claim 1. Naidu in view of Mohammadi and Chen fails to explicitly teach wherein the plurality of two-dimensional views further comprise a two-dimensional digital radiography (DR) image, and the two-dimensional DR image is acquired by a DR imaging device. However, Zhang explicitly teaches wherein the plurality of two-dimensional views further comprise a two-dimensional digital radiography (DR) image, and the two-dimensional DR image is acquired by a DR imaging device (Fig. 1, Paragraph [0076]- Zhang discloses the embodiment of the present disclosure proposes calculating a correlation between a column in the DR image obtained by the DR device and each column in the DR data extracted from the reconstructed three-dimensional image, and displaying a slice image corresponding to a column with the largest correlation on the screen together with the DR image.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi and Chen of a method of identifying at least one target object for a security inspection computed tomography (CT), comprising: performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views, with the teachings of Zhang wherein the plurality of two-dimensional views further comprise a two-dimensional DR image acquired by a DR imaging device.
The motivation behind the modification would have been to allow for a faster and more accurate system, since both Naidu and Zhang are systems that scan an object to classify the items within it: Naidu's system provides improved accuracy of threat item detection, while Zhang's system provides improved accuracy and speed of inspecting goods. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005 and 0057] and Zhang et al. (US 20150332498 A1), Paragraph [0023]. Regarding claim 17, Naidu in view of Mohammadi and Chen teaches the method of identifying the at least one target object for the security inspection CT according to claim 16. Naidu in view of Mohammadi and Chen fails to explicitly teach wherein the three-dimensional recognition result is projected onto the two-dimensional DR image and output as a recognition result of the two-dimensional DR image. However, Zhang explicitly teaches wherein the three-dimensional recognition result is projected onto the two-dimensional DR image and output as a recognition result of the two-dimensional DR image (Fig. 8, Paragraph [0083]- Zhang discloses the step of obtaining assistant DR data in the same angle of view as that of the DR image from the three-dimensional image comprises: projecting data of the three-dimensional image H(x,y,z) of the inspected object along a direction of the dimension y, to obtain DR data in the angle of view, wherein the data of the three-dimensional image H(x,y,z) has a dimensional size of X×Y×Z, the dimension x changes from 1 to X in a direction perpendicular to movement of a belt in a horizontal plane, the dimension y changes from 1 to Y in a straight-up direction, and the dimension z changes from 1 to Z in a direction along the movement of the belt in the horizontal plane. For example, the three-dimensional data H(x,y,z) is projected along the direction of the dimension y, to obtain two-dimensional data J(x,z) with reference to the above equation (6).).
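Zhang's projection step collapses the 3D volume H(x,y,z) along the y dimension to produce a 2D DR-like image J(x,z). Zhang's equation (6) is not reproduced in this action, so the sketch below simply sums attenuation values along y as a plausible stand-in; the function name and toy volume are illustrative assumptions.

```python
import numpy as np

def project_along_y(volume):
    """Collapse a 3D volume H(x, y, z) along the y axis to obtain a
    2D DR-like projection J(x, z). Summation stands in for Zhang's
    equation (6), which is not reproduced in this Office action."""
    return volume.sum(axis=1)

# Toy volume of shape (X, Y, Z) = (4, 3, 5), empty except for a
# "rod" of material spanning the full y extent at (x=1, z=2)
H = np.zeros((4, 3, 5))
H[1, :, 2] = 1.0
J = project_along_y(H)
# J has shape (4, 5); the rod accumulates along y into J[1, 2]
```

A 3D recognition result can then be overlaid on the DR view the same way, by projecting its voxel mask along y and highlighting the nonzero pixels of the resulting 2D mask, which is the gist of the claim 17 mapping.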
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Naidu in view of Mohammadi and Chen of a method of identifying at least one target object for a security inspection computed tomography (CT), comprising: performing a dimension reduction on three-dimensional CT data to generate a plurality of two-dimensional dimension-reduced views, with the teachings of Zhang wherein the three-dimensional recognition result is projected onto the two-dimensional DR image and output as a recognition result of the two-dimensional DR image. The motivation behind the modification would have been to allow for a faster and more accurate system, since both Naidu and Zhang are systems that scan an object to classify the items within it: Naidu's system provides improved accuracy of threat item detection, while Zhang's system provides improved accuracy and speed of inspecting goods. Please see Naidu et al. (US 20140010437 A1), Paragraphs [0005 and 0057] and Zhang et al. (US 20150332498 A1), Paragraph [0023].

Allowable Subject Matter

Claims 4 and 5 are objected to as being dependent upon a rejected base claim (claim 1), but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: Regarding claim 4, the prior art fails to explicitly teach wherein the voxel driving comprises: mapping each voxel in the three-dimensional CT data to a pixel in each two-dimensional view, querying and accumulating a two-dimensional semantic description information corresponding to the pixel, and generating the semantic feature matrix; and wherein the pixel driving comprises: mapping each pixel in the two-dimensional view to a straight line in the three-dimensional CT data, traversing each pixel in each two-dimensional view or each pixel in a region of interest, propagating a two-dimensional semantic description information corresponding to the pixel into the three-dimensional space along the straight line, and generating the semantic feature matrix, wherein the region of interest is given by the set of two-dimensional semantic descriptions, as claimed in claim 4.

Conclusion

Listed below is the prior art made of record and not relied upon that is considered pertinent to applicant's disclosure. Mohan et al. (US 11568656 B2)- A system for generating a 3D segmentation of a target volume is provided. The system accesses views of an X-ray scan of a target volume. The system applies a 2D CNN to each view to generate a 2D multi-channel feature vector for each view. The system applies a space carver to generate a 3D channel volume for each channel based on the 2D multi-channel feature vectors. The system then applies a linear combining technique to the 3D channel volumes to generate a 3D multi-label map that represents a 3D segmentation of the target volume. Please see Fig. 1 and Abstract. Zhang et al. (US 20150332448 A1)- Disclosed are object detection method, display methods and apparatuses.
The method includes obtaining slice data of inspected luggage in the CT system; generating 3D volume data of objects in the luggage from the slice data; for each object, determining a semantic description including at least a quantifier description of the object based on the 3D volume data; and upon reception of a user selection of an object, presenting the semantic description of the selected object while displaying a 3D image of the object. The above solutions can create a 3D model for objects in the inspected luggage in a relatively accurate manner, and thus provide better basis for subsequent shape feature extraction and security inspection, and reduce omission factor. Please see Fig. 1 and Abstract. Joskowicz et al. (US 20160335785 A1)- There are provided a method of CT volume reconstruction based on a baseline sinogram obtained by a prior scanning an object in B directions, and a system thereof. The method comprises: a) obtaining initial partial sinogram by initial repeat scanning the object in b directions out of B directions, b being substantially less than B; b) comparing the baseline sinogram and the initial partial sinogram to assess, for each voxel associated with the object, a likelihood of change; c) using the assessed likelihood of change for generating configuration data informative, at least, of rays to be cast in a further repeat scan in an un-scanned direction; d) performing a repeat scan in the un-scanned direction in accordance with the generated configuration data, thereby obtaining partial sinogram, and using the partial sinogram for updating the assessed likelihood of change; e) repeating operations c) and d) until all directions have been scanned to yield respective partial sinograms; f) composing the baseline and the partial sinograms into a composed sinogram; and g) processing the composed sinograms into an image of the object. Please see Fig. 1 and Abstract. Litvin et al.
(US 20110188751 A1)- Representations of an object can comprise two or more separate sub-objects, producing a compound object. Compound objects can affect the quality of object visualization and threat identification. As provided herein, a compound object can be separated into sub-objects based on object morphological properties (e.g., an object's shape, surface area). Further, a potential compound object can be split into sub-objects, for example, eroding one or more outer layers of volume space (e.g., voxels) from the potential compound object. Additionally, a volume of a representation of the sub-objects in an image can be reconstructed, for example, by generating sub-objects that have a combined volume approximate to that of the compound object. Furthermore, sub-objects, which can be parts of a same physical object, but may have been erroneously split, can be identified and merged using connectivity and compactness based techniques. Please see Fig. 1 and Abstract. PAGLIERONI et al. (US 20210049767 A1)- An automatic threat recognition (ATR) system is disclosed for scanning an article to recognize contraband items or items of interest contained within the article. The ATR system uses a CAT scanner to obtain a CT image scan of objects within the article, representing a plurality of 2D image slices of the article and its contents. Each 2D image slice includes information forming a plurality of voxels. The ATR system includes a computer and determines which voxels have a likelihood of representing materials of interest. It then aggregates those voxels to produce detected objects. The detected objects are further classified as items of interest vs. not of interest. The ATR system is based on learned parameters for a novel interaction of global and object context mechanisms. ATR system performance may be optimized by using jointly optimal global and object context parameters learned during training.
The global context parameters may apply to the article as a whole and facilitate object detection. The object context parameters may apply to the individual object detections. Please see Fig. 1 and Abstract. Basu et al. (US 20100246937 A1)- A method and system for producing images of at least one object of interest in a container. The method includes receiving three-dimensional volumetric scan data from a scan of the container, reconstructing a three-dimensional representation of the container from the three-dimensional volumetric scan data, and inspecting the three-dimensional representation to detect the at least one object of interest within the container. The method also includes re-projecting a two-dimensional image from one of the three-dimensional volumetric scan data and the three-dimensional representation, and identifying a first plurality of image elements in the two-dimensional image corresponding to a location of the at least one object of interest. The method further includes outputting the two-dimensional image with the first plurality of image elements highlighted. Please see Fig. 1 and Abstract. Tsuyuki et al.
(US 20060198490 A1)- An X-ray computed tomographic apparatus includes a gantry unit having a detector having X-ray detection element rows along a slice direction and an X-ray tube and acquiring projection data from a subject by helical scanning, an extraction unit extracting first and second projection data sets corresponding to first and second heartbeat periods from the acquired projection data, each of the first and second projection data sets covering an angle range required for the reconstruction of one frame image, a processing unit weighting each of the extracted first and second projection data sets with a weight corresponding to a data acquisition position and generating a third data set by combining the weighted first and second projection data sets, and a reconstruction processing unit reconstructing one frame image data set on the basis of the generated third data set. Please see Fig. 1 and Abstract. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUCIUS C.G. ALLEN whose telephone number is (703)756-5987. The examiner can normally be reached Mon-Fri, 8am-5pm (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571)272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /LUCIUS CAMERON GREE ALLEN/Examiner, Art Unit 2673 /CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673

Prosecution Timeline

Jan 30, 2024
Application Filed
Mar 05, 2026
Non-Final Rejection — §103, §Other (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597105
SEMANTIC-AWARE AUTO WHITE BALANCE
2y 5m to grant Granted Apr 07, 2026
Patent 12579755
OVERLAYING AUGMENTED REALITY (AR) CONTENT WITHIN AN AR HEADSET COUPLED TO A MAGNIFYING LOUPE
2y 5m to grant Granted Mar 17, 2026
Patent 12541972
Computing Device and Method for Handling an Object in Recorded Images
2y 5m to grant Granted Feb 03, 2026
Patent 12536247
Roughness Compensation Method and System, Image Processing Device, and Readable Storage Medium
2y 5m to grant Granted Jan 27, 2026
Patent 12529684
INSPECTION DEVICE, INSPECTION METHOD, AND INSPECTION PROGRAM
2y 5m to grant Granted Jan 20, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
71%
Grant Probability
99%
With Interview (+39.3%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
