Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Response to Amendment
Applicant’s remarks, see pages 14-15, filed 12/19/2025, with respect to the drawing objections to Figs. 4 and 6 set forth in the non-final office action dated 11/12/2025, have been fully considered and are persuasive in view of the amendments made in accordance with the Examiner’s suggested corrections. Thus, the drawing objections to Figs. 4 and 6 have been withdrawn.
Applicant’s remarks, see page 15, filed 12/19/2025, with respect to the objections to several paragraphs of the specification containing minor informalities, set forth in the non-final office action dated 11/12/2025, have been fully considered and are persuasive in view of the amendments made in accordance with the Examiner’s suggested corrections. Thus, the objections to informalities within the specification have been withdrawn.
Applicant’s remarks, see page 15, filed 12/19/2025, with respect to the objections to claims 2-4, 6-7, 9, 11, and 13-14 due to certain informalities, set forth in the non-final office action dated 11/12/2025, have been fully considered and are persuasive in view of the amendments made in accordance with the Examiner’s suggested corrections. Thus, the objections to claims 2-4, 6-7, 9, 11, and 13-14 have been withdrawn.
Applicant’s remarks, see pages 15-16, filed 12/19/2025, with respect to the rejections of claims 1-3, 5-8, 10, and 12-14, along with their dependent claims 4 and 11, under 35 U.S.C. 112(b) for failing to disclose corresponding structure supporting claim limitations that invoked 35 U.S.C. 112(f), set forth in the non-final office action dated 11/12/2025, have been fully considered and are persuasive because the claims were amended, in accordance with the Examiner’s suggested corrections, to no longer recite terms that invoke interpretation under 35 U.S.C. 112(f). Thus, these rejections of claims 1-3, 5-8, 10, and 12-14, along with their dependent claims 4 and 11, have been withdrawn.
Applicant’s remarks, see pages 16-17, filed 12/19/2025, with respect to the rejections of claims 1-3, 5-8, 10, and 12-14, along with their dependent claims 4 and 11, under 35 U.S.C. 112(a) for failing to disclose corresponding structure supporting claim limitations that invoked 35 U.S.C. 112(f), set forth in the non-final office action dated 11/12/2025, have been fully considered and are persuasive because the claims were amended, in accordance with the Examiner’s suggested corrections, to no longer recite terms that invoke interpretation under 35 U.S.C. 112(f). Thus, these rejections of claims 1-3, 5-8, 10, and 12-14, along with their dependent claims 4 and 11, have been withdrawn.
Response to Arguments
Applicant’s remarks, see pages 17-22, filed 12/19/2025, with respect to the rejections of independent claims 1 and 8, along with dependent claims 2-6 and 9-13, have been fully considered but are not persuasive. The office would like to bring to the applicant’s attention that dependent claims 7 and 14 are objected to as being dependent upon rejected base claims 1 and 8, respectively, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The applicant argues on page 19, “claim 1 is patentable over the cited references, because the cited references, taken alone or in combination, do not disclose or suggest each and every element of claim 1”. The applicant further argues, on pages 21-22, “Murthy, throughout its entire disclosure, does not disclose the row representative graph at all. At most, Murthy discloses an intensity profile along a horizontal line passing through a region of interest. Therefore, the row intensity profile, i.e., the row profile, depicts intensity value along the single defined row. In contrast, in the claimed invention, a row representative value is extracted from each row and multiple row representative values extracted from the rows in the region of interests form a graph, which is the row representative graph. Therefore, it is respectfully submitted that the intensity profile disclosed Murthy does not correspond to the row representative graph of the claimed invention. Therefore, Murthy also fails to teach “training the training model based on the training image row representative graph.” Other cited references fail to cure the deficiencies of Murthy.”
In response, the office does not find this argument to be persuasive. The office would like to respectfully bring to the applicant’s attention that nowhere does claim 1 recite that row representative values are extracted from each row, nor does it specify multiple row representative values. Claim 1 simply states “extracting row representative values from the RoI in the radiographic image to create a training image row representative graph,” and that limitation, given the breadth of the claim language as presented, is taught by Murthy.
Based on the breadth of the claim language, the combination of Dujmic (US 20200051017 A1), hereinafter referenced as Dujmic, in view of Geng (US 20050096515 A1), hereinafter referenced as Geng, further in view of Murthy (US 6055295 A), hereinafter referenced as Murthy, teaches the limitations of claim 1 as detailed below.
Regarding claim 1, Dujmic teaches a method of detecting a radiographic object (Fig. 1-2, Paragraph [0028]),
performed by at least one processor (Fig. 6, #602 called a processor, Paragraph [0101]) executing a program stored in a memory (Fig. 6, #606 called memory, Paragraph [0101] – Dujmic discloses computing device 600 also includes processor 602 and associated core 604, and optionally, one or more additional processor(s) 602′ and associated core(s) 604′ (for example, in the case of computer systems having multiple processors/cores), for executing computer-readable and computer-executable instructions or software stored in the memory 606 and other programs for controlling system hardware.).
Dujmic further teaches wherein, prior to receiving the radiographic image (Fig. 1, Paragraph [0029] – Dujmic discloses the computing device is configured to train a machine-learning algorithm(s) with a training data set that includes at least one radiographic image to generate at least one feature vector.),
performing a training operation for training the training model (Fig. 1, #120 called training data set module, Paragraph [0066] – Dujmic discloses the training data set module 120 may be a software and/or hardware-implemented module configured to manage and store a training data set for the machine-learning algorithm(s) employed by the image processing system 100.),
the training operation (Fig. 2, Paragraph [0074] – Dujmic discloses block 202, the machine-learning algorithm(s) employed in the image processing system 100 is trained to analyze a specific object type within a container using a training data set) comprises:
receiving a training radiographic image including a region of interest (RoI) (Fig. 1, Paragraph [0066] – Dujmic discloses the training data set includes images of containers obtained using a radiographic screening machine or device. The images may depict containers storing one or more objects.)
that is a region in which a target tube as a radiographic object exists (Fig. 4A illustrates beer cans in a cargo container [wherein the beer cans are the target tube or region of interest]. Paragraph [0087])
and a region out of the RoI (Fig. 4A illustrates a tube-like container that houses the cargo [wherein the container is the untargeted tube that is not a radiographic object, i.e., the region out of the RoI], see paragraph [0087]. Further, in paragraph [0072], Dujmic discloses embodiments may perform manifest verification (MV) to verify that objects listed on the manifest correspond with objects in the containers, to perform empty container verification (ECV) to verify that containers are empty, to determine a cargo type or to perform object type detection, to search for specific objects and/or threats, and/or to perform anomaly detection to search for odd patterns),
Although Dujmic teaches training the training model (Fig. 1, #120 called training data set module, Paragraph [0066] – Dujmic discloses the training data set module 120 may be a software and/or hardware-implemented module configured to manage and store a training data set for the machine-learning algorithm(s) employed by the image processing system 100.)
Dujmic fails to explicitly teach extracting row representative values from the RoI in the radiographic image to create a training image row representative graph, and training the training model based on the training image row representative graph.
However, Murthy explicitly teaches extracting row representative values from the RoI in the radiographic image (Col. 4, Lines [16-21] – Murthy discloses soft-tissue boundaries are detected in the present invention by determining negative curvature points along line profiles of intensity in multiple chosen scanning directions. These provide a reliable indicator of low-contrast boundaries, such as soft-tissue. Further, in Fig. 3A, 3B, Col. 4 Lines [26-29], Murthy discloses an intensity profile along a horizontal line passing through the region of interest in FIG. 3A has points of negative curvature corresponding to these soft-tissue boundaries as shown in FIG. 3B.)
to create a training image row representative graph (Fig. 3A-3C, Col. 4, Lines [21-32] – Murthy discloses even for very low contrast or hazy soft-tissue boundaries, well defined points of negative curvature exist on the line profiles of intensity. This is illustrated in FIG. 3A-3C. FIG. 3A depicts a peripheral x-ray image with low-contrast soft tissue boundaries near the ankle. An intensity profile along a horizontal line passing through the region of interest in FIG. 3A has points of negative curvature corresponding to these soft-tissue boundaries as shown in FIG. 3B [row representative graph]. From a histogram equalized image of the line profile curvatures, very low-contrast boundaries are clearly preserved in the negative curvature image of FIG. 3C),
and training the training model (Fig. 2, Col. 5, Lines [51-58] – Murthy discloses in step H of the block diagram of FIG. 2, each pixel in the image is classified as a body part or a non-body part, based on its feature values, using a decision tree. This involves constructing a set of rules which enable body and non-body regions to be determined on the basis of the feature values. The rules are constructed using supervised learning and are therefore, referred to herein as automatically learned classifiers)
based on the training image row representative graph (Fig. 2, Col. 4, Lines [8-12] – Murthy discloses Step C involves detecting soft-tissue boundaries using directional curvatures of intensity profiles. Finding the soft-tissue boundaries accurately is critical because the subsequent steps of global feature extraction and classification rely on this information.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic in view of Geng, having a method of detecting a radiographic object, performed by at least one processor executing a program stored in a memory, the method comprising: receiving a radiographic image obtained by irradiating a region containing a plurality of tubes including at least one target tube as a radiographic object and at least one untargeted tube that is not a radiographic object; extracting feature values of extremal points from the radiographic image to detect feature vectors; and analyzing the feature vectors using a training model to detect a region in the radiographic image where the at least one target tube as a radiographic object exists, wherein, prior to receiving the radiographic image, performing a training operation for training the training model, the training operation comprises: receiving a training radiographic image including a region of interest (RoI) that is a region in which a target tube as a radiographic object exists and a region out of the RoI, with the teachings of Murthy, having extracting row representative values from the RoI in the radiographic image to create a training image row representative graph, and training the training model based on the training image row representative graph.
The result of the combination is Dujmic’s method of detecting a radiographic object wherein row representative values are extracted from the RoI in the radiographic image to create a training image row representative graph, and the training model is trained based on the training image row representative graph.
The motivation behind the modification would have been to obtain a method of detecting a radiographic object that operates robustly and efficiently, with automated detection and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Murthy are radiographic imaging systems that use feature vectors to analyze images. Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Murthy discloses that the method performed by the automatic collimation apparatus 38 of the present invention successfully overcomes the difficulties [of locating the body in an x-ray fluoroscopy image]; when the method is implemented as software, it operates robustly and efficiently on noisy, low contrast, possibly pre-collimated x-ray fluoroscopy images. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Murthy (US 6055295 A), Col. 3, Lines [45-47].
The applicant argues on page 22, “Claim 8 is an independent claim reciting the similar features discussed above with respect to claim 1. Therefore, claim 8 is patentable based on the similar reasons presented above.”
In response, the office does not find this argument to be persuasive based on the same reasons set forth above and the rejection below. Based on the breadth of the claim language, the combination of Dujmic (US 20200051017 A1), hereinafter referenced as Dujmic, in view of Geng (US 20050096515 A1), hereinafter referenced as Geng, further in view of Murthy (US 6055295 A), hereinafter referenced as Murthy, teaches the limitations of claim 8 as detailed below.
Regarding claim 8, Dujmic teaches a device for detecting a radiographic object (Fig. 1-2. Paragraph [0028]-Dujmic discloses radiographic images are formed by creating x-rays with an x-ray source that is collimated to form a narrow fan beam, passing cargo through the beam, and detecting x-rays that pass through the cargo. Please also read Paragraph [0091] and Fig. 5.), the device comprising:
a memory storing a program (Fig. 6, #606 called memory, Paragraph [0101]); and at least one processor (Fig. 6, #602 called a processor, Paragraph [0101]) configured to perform, when executing the program: (Fig. 6, Paragraph [0101] – Dujmic discloses computing device 600 also includes processor 602 and associated core 604, and optionally, one or more additional processor(s) 602′ and associated core(s) 604′ (for example, in the case of computer systems having multiple processors/cores), for executing computer-readable and computer-executable instructions or software stored in the memory 606 and other programs for controlling system hardware)
Dujmic further teaches wherein, prior to receiving the radiographic image (Fig. 1, Paragraph [0029] – Dujmic discloses the computing device is configured to train a machine-learning algorithm(s) with a training data set that includes at least one radiographic image to generate at least one feature vector),
the at least one processor (Fig. 6, #602 called a processor, Paragraph [0101]) performs a training operation for training the training model (Fig. 1, #120 called training data set module, Paragraph [0066] – Dujmic discloses the training data set module 120 may be a software and/or hardware-implemented module configured to manage and store a training data set for the machine-learning algorithm(s) employed by the image processing system 100),
the training operation (Fig. 2, Paragraph [0074] – Dujmic discloses block 202, the machine-learning algorithm(s) employed in the image processing system 100 is trained to analyze a specific object type within a container using a training data set) comprises:
receiving a training radiographic image including a region of interest (RoI) (Fig. 1, Paragraph [0066] – Dujmic discloses the training data set includes images of containers obtained using a radiographic screening machine or device. The images may depict containers storing one or more objects.)
that is a region in which a target tube as a radiographic object exists (Fig. 4A illustrates beer cans in a cargo container [wherein the beer cans are the target tube or region of interest]. Paragraph [0087])
and a region out of the RoI (Fig. 4A illustrates a tube-like container that houses the cargo [wherein the container is the untargeted tube that is not a radiographic object, i.e., the region out of the RoI], see paragraph [0087]. Further, in paragraph [0072], Dujmic discloses embodiments may perform manifest verification (MV) to verify that objects listed on the manifest correspond with objects in the containers, to perform empty container verification (ECV) to verify that containers are empty, to determine a cargo type or to perform object type detection, to search for specific objects and/or threats, and/or to perform anomaly detection to search for odd patterns),
Although Dujmic teaches training the training model (Fig. 1, #120 called training data set module, Paragraph [0066] – Dujmic discloses the training data set module 120 may be a software and/or hardware-implemented module configured to manage and store a training data set for the machine-learning algorithm(s) employed by the image processing system 100.)
Dujmic fails to explicitly teach extracting row representative values from the RoI in the radiographic image to create a training image row representative graph, and training the training model based on the training image row representative graph.
However, Murthy explicitly teaches extracting row representative values from the RoI in the radiographic image (Col. 4, Lines [16-21] – Murthy discloses soft-tissue boundaries are detected in the present invention by determining negative curvature points along line profiles of intensity in multiple chosen scanning directions. These provide a reliable indicator of low-contrast boundaries, such as soft-tissue. Further, in Fig. 3A, 3B, Col. 4 Lines [26-29], Murthy discloses an intensity profile along a horizontal line passing through the region of interest in FIG. 3A has points of negative curvature corresponding to these soft-tissue boundaries as shown in FIG. 3B.)
to create a training image row representative graph (Fig. 3A-3C, Col. 4, Lines [21-32] – Murthy discloses even for very low contrast or hazy soft-tissue boundaries, well defined points of negative curvature exist on the line profiles of intensity. This is illustrated in FIG. 3A-3C. FIG. 3A depicts a peripheral x-ray image with low-contrast soft tissue boundaries near the ankle. An intensity profile along a horizontal line passing through the region of interest in FIG. 3A has points of negative curvature corresponding to these soft-tissue boundaries as shown in FIG. 3B [row representative graph]. From a histogram equalized image of the line profile curvatures, very low-contrast boundaries are clearly preserved in the negative curvature image of FIG. 3C),
and training the training model (Fig. 2, Col. 5, Lines [51-58] – Murthy discloses in step H of the block diagram of FIG. 2, each pixel in the image is classified as a body part or a non-body part, based on its feature values, using a decision tree. This involves constructing a set of rules which enable body and non-body regions to be determined on the basis of the feature values. The rules are constructed using supervised learning and are therefore, referred to herein as automatically learned classifiers)
based on the training image row representative graph (Fig. 2, Col. 4, Lines [8-12] – Murthy discloses Step C involves detecting soft-tissue boundaries using directional curvatures of intensity profiles. Finding the soft-tissue boundaries accurately is critical because the subsequent steps of global feature extraction and classification rely on this information.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic in view of Geng, having a device for detecting a radiographic object, the device comprising: a memory storing a program; and at least one processor configured to perform, when executing the program: receiving a radiographic image obtained by irradiating a region containing a plurality of tubes including at least one target tube as a radiographic object and at least one untargeted tube that is not a radiographic object; extracting feature values of extremal points from the radiographic image to detect feature vectors; and analyzing the feature vectors using a training model to detect a region in the radiographic image where the at least one target tube as a radiographic object exists, wherein, prior to receiving the radiographic image, performing a training operation for training the training model, the training operation comprises: receiving a training radiographic image including a region of interest (RoI) that is a region in which a target tube as a radiographic object exists and a region out of the RoI, with the teachings of Murthy, having extracting row representative values from the RoI in the radiographic image to create a training image row representative graph, and training the training model based on the training image row representative graph.
The result of the combination is Dujmic’s device for detecting a radiographic object wherein row representative values are extracted from the RoI in the radiographic image to create a training image row representative graph, and the training model is trained based on the training image row representative graph.
The motivation behind the modification would have been to obtain a radiographic imaging system that operates robustly and efficiently, with automated detection and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Murthy are radiographic imaging systems that use feature vectors to analyze images. Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Murthy discloses that the method performed by the automatic collimation apparatus 38 of the present invention successfully overcomes the difficulties [of locating the body in an x-ray fluoroscopy image]; when the method is implemented as software, it operates robustly and efficiently on noisy, low contrast, possibly pre-collimated x-ray fluoroscopy images. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Murthy (US 6055295 A), Col. 3, Lines [45-47].
The applicant argues on page 22, “Claims 2-6 and 9-13 are dependent claims depending upon claims 1 or 8. Therefore, these claims are patentable in virtue of their dependencies.” In response, the office does not find this argument to be persuasive based on the same reasons set forth above and the rejection below.
The applicant argues on page 23, “In view of the above, reconsideration and allowance of this application are now believed to be in order, and such actions are hereby solicited.” The application is not allowed at this time based on the reasons set forth above and below.
The office respectfully advises the applicant to amend the claims to overcome the prior art of record.
Claim Rejections – 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5 and 8-12 are rejected under 35 U.S.C. 103 as being unpatentable over Dujmic (US 20200051017 A1), hereinafter referenced as Dujmic, in view of Geng (US 20050096515 A1), hereinafter referenced as Geng, further in view of Murthy (US 6055295 A), hereinafter referenced as Murthy.
Regarding claim 1, Dujmic teaches a method of detecting a radiographic object (Fig. 1-2, Paragraph [0028]),
performed by at least one processor (Fig. 6, #602 called a processor, Paragraph [0101]) executing a program stored in a memory (Fig. 6, #606 called memory, Paragraph [0101] – Dujmic discloses computing device 600 also includes processor 602 and associated core 604, and optionally, one or more additional processor(s) 602′ and associated core(s) 604′ (for example, in the case of computer systems having multiple processors/cores), for executing computer-readable and computer-executable instructions or software stored in the memory 606 and other programs for controlling system hardware.), the method comprising:
receiving, by a feature processing module, a radiographic image (Fig. 4A illustrates images #400 with objects. Paragraph [0087]) obtained by irradiating a region containing a plurality of tubes (Fig. 4A. Paragraph [0087] – Dujmic discloses FIG. 4A illustrates radiographic images 400 of containers containing common object types, including refrigerators, motorcycles, beer, tobacco, and polyethylene terephthalate (PET), and an empty container) including at least one target tube as a radiographic object (Fig. 4A illustrates beer cans in a truck, wherein the beer cans are the target tube. Paragraph [0087]) and at least one untargeted tube that is not a radiographic object (Fig. 4A illustrates a tube-like container that houses the cargo, wherein the container is the untargeted tube that is not a radiographic object. Paragraph [0072] – Dujmic discloses embodiments may perform manifest verification (MV) to verify that objects listed on the manifest correspond with objects in the containers, to perform empty container verification (ECV) to verify that containers are empty, to determine a cargo type or to perform object type detection, to search for specific objects and/or threats, and/or to perform anomaly detection to search for odd patterns. Further, in paragraph [0087], Dujmic discloses FIG. 4A illustrates radiographic images 400 of containers containing common object types, including refrigerators, motorcycles, beer, tobacco, and polyethylene terephthalate (PET), and an empty container.);
and analyzing, by a detection module, the feature vectors using a training model to detect a region (Fig. 4 illustrates a cargo container wherein the cargo is detected. Paragraph [0060] – Dujmic discloses cargo type detection is performed by the image processing system described herein by analyzing radiographic images of one or more objects under inspection. The image processing system analyzes the images using an artificial type neural network to extract a feature vector. Each feature vector is compared against historic distribution feature vectors for the segment and the feature vectors are used to classify each of the one or more objects being scanned.) in the radiographic image where the at least one target tube as a radiographic object exists (Fig. 4-6. Paragraph [0035]).
Dujmic further teaches wherein, prior to receiving the radiographic image (Fig. 1, Paragraph [0029] – Dujmic discloses the computing device is configured to train a machine-learning algorithm(s) with a training data set that includes at least one radiographic image to generate at least one feature vector.),
performing a training operation for training the training model (Fig. 1, #120 called training data set module, Paragraph [0066] – Dujmic discloses the training data set module 120 may be a software and/or hardware-implemented module configured to manage and store a training data set for the machine-learning algorithm(s) employed by the image processing system 100.),
the training operation (Fig. 2, Paragraph [0074] – Dujmic discloses block 202, the machine-learning algorithm(s) employed in the image processing system 100 is trained to analyze a specific object type within a container using a training data set) comprises:
receiving a training radiographic image including a region of interest (RoI) (Fig. 1, Paragraph [0066] – Dujmic discloses the training data set includes images of containers obtained using a radiographic screening machine or device. The images may depict containers storing one or more objects.)
that is a region in which a target tube as a radiographic object exists (Fig. 4A illustrates beer cans in a cargo container [wherein the beer cans are the target tube or region of interest]. Paragraph [0087])
and a region out of the RoI (Fig. 4A illustrates a tube-like container that houses the cargo [wherein the container is the untargeted tube that is not a radiographic object, i.e., the region out of the RoI], see paragraph [0087]. Further, in paragraph [0072], Dujmic discloses embodiments may perform manifest verification (MV) to verify that objects listed on the manifest correspond with objects in the containers, to perform empty container verification (ECV) to verify that containers are empty, to determine a cargo type or to perform object type detection, to search for specific objects and/or threats, and/or to perform anomaly detection to search for odd patterns),
Although Dujmic teaches extracting, by the feature processing module (Fig. 1, #100 called an image processing system. Paragraph [0048] – Dujmic discloses the image processing system divides the image into one or more segments that are analyzed using an autoencoder type neural network to extract a feature vector),
Dujmic fails to explicitly teach extracting, by the feature processing module, feature values of extremal points from the radiographic image to detect feature vectors.
However, Geng explicitly teaches extracting, by the feature processing module, feature values of extremal points (Fig. 2A-C, #f1-g called fiducial points. Paragraph [0040]) from the radiographic image to detect feature vectors (Fig. 2A-C. Paragraph [0041]-Geng discloses around each of these fiducial points, we will extract the local surface characteristics ([x, y, z] coordinate value, surface curvatures, surface normal vector, etc) using a 3D data set of the neighboring points, as shown in FIG. 2B. The collection of the local features of all fiducial points forms a "feature vector" of this particular surface in this configuration.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic, having a method of detecting a radiographic object, performed by at least one processor executing a program stored in a memory, the method comprising: receiving a radiographic image obtained by irradiating a region containing a plurality of tubes including at least one target tube as a radiographic object, with the teachings of Geng, having extracting feature values of extremal points from the radiographic image to detect feature vectors.
The combination results in Dujmic’s method of detecting a radiographic object wherein feature values of extremal points are extracted from the radiographic image to detect feature vectors.
The motivation behind the modification would have been to obtain a radiographic imaging system that enhances processing speed, automated detection, and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Geng are radiographic imaging systems that use feature vectors to obtain images, wherein Dujmic has a radiographic imaging system that introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Geng has a radiographic imaging system wherein, instead of comparing all 3D surface data of a captured 3D image with that of the reference image, the feature vectors are compared to improve the processing speed and allow for real-time 3D image comparison. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Geng (US 20050096515 A1), Paragraph [0041].
Although Dujmic in view of Geng teaches training the training model (Fig. 1, #120 called training data set module, Paragraph [0066] – Dujmic discloses the training data set module 120 may be a software and/or hardware-implemented module configured to manage and store a training data set for the machine-learning algorithm(s) employed by the image processing system 100.)
Dujmic in view of Geng fails to explicitly teach extracting row representative values from the RoI in the radiographic image to create a training image row representative graph, and training the training model based on the training image row representative graph.
However, Murthy explicitly teaches extracting row representative values from the RoI in the radiographic image (Col. 4, Lines [16-21]- Murthy discloses soft-tissue boundaries are detected in the present invention by determining negative curvature points along line profiles of intensity in multiple chosen scanning directions. These provide a reliable indicator of low-contrast boundaries, such as soft-tissue. Further, in Fig. 3A, 3B, Col. 4 Lines [26-29], Murthy discloses an intensity profile along a horizontal line passing through the region of interest in FIG. 3A has points of negative curvature corresponding to these soft-tissue boundaries as shown in FIG. 3B.)
to create a training image row representative graph (Fig. 3A-3C, Col. 4, Lines [21-32]- Murthy discloses even for very low contrast or hazy soft-tissue boundaries, well defined points of negative curvature exist on the line profiles of intensity. This is illustrated in FIG. 3A-3C. FIG. 3A depicts a peripheral x-ray image with low-contrast soft tissue boundaries near the ankle. An intensity profile along a horizontal line passing through the region of interest in FIG. 3A has points of negative curvature corresponding to these soft-tissue boundaries as shown in FIG. 3B [row representative graph]. From a histogram equalized image of the line profile curvatures, very low-contrast boundaries are clearly preserved in the negative curvature image of FIG. 3C),
and training the training model (Fig. 2, Col. 5, Lines [51-58] – Murthy discloses in step H of the block diagram of FIG. 2, each pixel in the image is classified as a body part or a non-body part, based on its feature values, using a decision tree. This involves constructing a set of rules which enable body and non-body regions to be determined on the basis of the feature values. The rules are constructed using supervised learning and are therefore, referred to herein as automatically learned classifiers)
based on the training image row representative graph (Fig. 2, Col. 4, Lines [8-12] – Murthy discloses Step C involves detecting soft-tissue boundaries using directional curvatures of intensity profiles. Finding the soft-tissue boundaries accurately is critical because the subsequent steps of global feature extraction and classification rely on this information.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic in view of Geng of having a method of detecting a radiographic object, performed by at least one processor executing a program stored in a memory, the method comprising: receiving a radiographic image obtained by irradiating a region containing a plurality of tubes including at least one target tube as a radiographic object and at least one untargeted tube that is not a radiographic object; extracting feature values of extremal points from the radiographic image to detect feature vectors; and analyzing the feature vectors using a training model to detect a region in the radiographic image where the at least one target tube as a radiographic object exists, wherein, prior to receiving the radiographic image, performing a training operation for training the training model, the training operation comprises: receiving a training radiographic image including a region of interest (RoI) that is a region in which a target tube as a radiographic object exists and a region out of the RoI, with the teachings of Murthy of extracting row representative values from the RoI in the radiographic image to create a training image row representative graph, and training the training model based on the training image row representative graph.
The combination results in Dujmic’s method of detecting a radiographic object wherein row representative values are extracted from the RoI in the radiographic image to create a training image row representative graph, and the training model is trained based on the training image row representative graph.
The motivation behind the modification would have been to obtain a method of detecting a radiographic object that operates robustly and efficiently, along with automated detection and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Murthy are radiographic imaging systems that use feature vectors to obtain images, wherein Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Murthy has a radiographic imaging system wherein the method performed by the automatic collimation apparatus 38 of the present invention successfully overcomes the difficulties [of locating the body in an x-ray fluoroscopy image]. When the method is implemented as software, it operates robustly and efficiently on noisy, low contrast, possibly pre-collimated x-ray fluoroscopy images. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Murthy (US 6055295 A), Col. 3 Lines [45-47].
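For illustration only (this sketch is not part of the claims or of any cited reference; the 2D array representation, the RoI bounds format, and the choice of the per-row mean as the representative statistic are all assumptions), the claimed extraction of row representative values from the RoI to create a training image row representative graph could be sketched as:

```python
import numpy as np

def row_representative_graph(image, roi):
    """Reduce each row of the RoI to one representative value.

    image: 2D grayscale array; roi: (top, bottom, left, right) bounds.
    The per-row mean is assumed as the representative statistic; any
    row-wise statistic (median, maximum) could serve the same role.
    """
    top, bottom, left, right = roi
    patch = image[top:bottom, left:right]
    return patch.mean(axis=1)  # one value per row: the row representative graph
```

A training model would then be fit on graphs extracted this way from labeled training radiographic images.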
Regarding claim 2, Dujmic and Geng in view of Murthy teach a method according to claim 1,
Although Dujmic further teaches wherein detecting the feature vectors (Fig. 4, illustrates a cargo container wherein the cargo is detected. Paragraph [0060]) comprises:
Dujmic fails to explicitly teach extracting feature values from the extremal points to generate feature vectors.
However, Geng explicitly teaches extracting feature values (Fig. 2A-C, #f1-g called fiducial points. Paragraph [0040]) from the extremal points to generate feature vectors (Fig. 2A-C. Paragraph [0041]-Geng discloses around each of these fiducial points, we will extract the local surface characteristics ([x, y, z] coordinate value, surface curvatures, surface normal vector, etc) using a 3D data set of the neighboring points, as shown in FIG. 2B. The collection of the local features of all fiducial points forms a "feature vector" of this particular surface in this configuration).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic in view of Geng of having a method of detecting a radiographic object, the method comprising: receiving a radiographic image obtained by irradiating a region containing a plurality of tubes including at least one target tube as a radiographic object, with the teachings of Geng of extracting feature values from the extremal points to generate feature vectors.
The combination results in Dujmic’s method of detecting a radiographic object wherein feature values are extracted from the extremal points to generate feature vectors.
The motivation behind the modification would have been to obtain a radiographic imaging system that enhances processing speed, automated detection, and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Geng are radiographic imaging systems that use feature vectors to obtain images, wherein Dujmic has a radiographic imaging system that introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Geng has a radiographic imaging system wherein, instead of comparing all 3D surface data of a captured 3D image with that of the reference image, the feature vectors are compared to improve the processing speed and allow for real-time 3D image comparison. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Geng (US 20050096515 A1), Paragraph [0041].
Although Dujmic in view of Geng teaches extracting row representative values (Fig. 2A-2C, Paragraph [0041]),
Dujmic in view of Geng fails to explicitly teach extracting row representative values from the radiographic image to create an image row representative graph; extracting extremal points from the image row representative graph.
However, Murthy explicitly teaches extracting row representative values (Fig. 3A, 3B, Col. 4 Lines [26-29]- Murthy discloses an intensity profile along a horizontal line passing through the region of interest in FIG. 3A has points of negative curvature corresponding to these soft-tissue boundaries as shown in FIG. 3B) from the radiographic image to create an image row representative graph (Fig. 3A-3C, Col. 4, Lines [23-32]- Murthy discloses a peripheral x-ray image with low-contrast soft tissue boundaries near the ankle. An intensity profile along a horizontal line passing through the region of interest in FIG. 3A has points of negative curvature corresponding to these soft-tissue boundaries as shown in FIG. 3B. From a histogram equalized image of the line profile curvatures, very low-contrast boundaries are clearly preserved in the negative curvature image of FIG. 3C.);
extracting extremal points (Fig. 2, Step E, Col. 5, Lines [9-15]- Murthy discloses Step E of the method depicted in FIG. 2 involves dividing the image into "regions". This is accomplished by providing the boundaries of significant regions in the image with a one-pixel representation of the cleaned up negative curvatures. The one-pixel representation is obtained by finding the local extrema in horizontal and vertical directions, combining them and performing simple noise removal using conventional connected components analysis techniques.) from the image row representative graph (Col. 4, Lines [17-19]-Murthy discloses soft-tissue boundaries are detected in the present invention by determining negative curvature points along line profiles of intensity [row representative graph] in multiple chosen scanning directions.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic in view of Geng of having a method, wherein detecting the feature vectors comprises: extracting row representative values and extracting feature values from the extremal points to generate feature vectors, with the teachings of Murthy of extracting row representative values from the radiographic image to create an image row representative graph and extracting extremal points from the image row representative graph.
The combination results in Dujmic’s method wherein detecting the feature vectors comprises extracting row representative values from the radiographic image to create an image row representative graph and extracting extremal points from the image row representative graph.
The motivation behind the modification would have been to obtain a radiographic imaging system that operates robustly and efficiently, along with automated detection and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Murthy are radiographic imaging systems that use feature vectors to obtain images, wherein Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Murthy has a radiographic imaging system wherein the method performed by the automatic collimation apparatus 38 of the present invention successfully overcomes the difficulties [of locating the body in an x-ray fluoroscopy image]. When the method is implemented as software, it operates robustly and efficiently on noisy, low contrast, possibly pre-collimated x-ray fluoroscopy images. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Murthy (US 6055295 A), Col. 3 Lines [45-47].
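As an illustrative sketch only (the simple neighbor-comparison rule below is an assumption for illustration, not a method drawn from the record or from Murthy's disclosed curvature analysis), extremal points can be extracted from an image row representative graph by locating the local maxima and minima:

```python
def extremal_points(graph):
    """Return indices of local maxima and minima of a 1D row
    representative graph (endpoints are excluded for simplicity)."""
    pts = []
    for i in range(1, len(graph) - 1):
        is_max = graph[i] > graph[i - 1] and graph[i] > graph[i + 1]
        is_min = graph[i] < graph[i - 1] and graph[i] < graph[i + 1]
        if is_max or is_min:
            pts.append(i)
    return pts
```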
Regarding claim 3, Dujmic and Geng in view of Murthy teach a method according to claim 2,
Although Dujmic teaches a method of detecting a radiographic object (Fig. 1-2. Paragraph [0028]).
Dujmic and Geng fail to explicitly teach wherein creating the image row representative graph comprises: extracting representative values for a plurality of pixels belonging to the same row through statistical analysis of the plurality of pixels in the radiographic image and creating the image row representative graph containing the extracted representative values.
However, Murthy explicitly teaches wherein creating the image row representative graph comprises (Fig. 3A-3C, Col. 4, Lines [23-32]): extracting representative values for a plurality of pixels (Col. 4, Lines [16-21]- Murthy discloses soft-tissue boundaries are detected in the present invention by determining negative curvature points along line profiles of intensity in multiple chosen scanning directions. These provide a reliable indicator of low-contrast boundaries, such as soft-tissue.)
belonging to the same row through statistical analysis of the plurality of pixels in the radiographic image (Col. 5, Lines [43-48]- Murthy discloses the method involves inter-region and intra-region propagation of the features. The features are first computed along scan lines in chosen directions [rows] in the image and then efficiently propagated over entire regions. Well known adaptive smoothing techniques are used herein for feature value propagation)
and creating the image row representative graph (Fig. 3A-3C, Col. 4, Lines [21-32]- Murthy discloses even for very low contrast or hazy soft-tissue boundaries, well defined points of negative curvature exist on the line profiles of intensity. This is illustrated in FIG. 3A-3C. FIG. 3A depicts a peripheral x-ray image with low-contrast soft tissue boundaries near the ankle. An intensity profile along a horizontal line passing through the region of interest in FIG. 3A has points of negative curvature corresponding to these soft-tissue boundaries as shown in FIG. 3B. From a histogram equalized image of the line profile curvatures, very low-contrast boundaries are clearly preserved in the negative curvature image of FIG. 3C) containing the extracted representative values (Col. 4, Lines [17-19]- Murthy discloses soft-tissue boundaries are detected in the present invention by determining negative curvature points along line profiles of intensity in multiple chosen scanning directions).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic and Geng in view of Murthy of having a method of detecting a radiographic object, with the teachings of Murthy of having wherein creating the image row representative graph comprises: extracting representative values for a plurality of pixels belonging to the same row through statistical analysis of the plurality of pixels in the radiographic image, and creating the image row representative graph containing the extracted representative values.
The combination results in Dujmic’s method of detecting a radiographic object wherein creating the image row representative graph comprises: extracting representative values for a plurality of pixels belonging to the same row through statistical analysis of the plurality of pixels in the radiographic image, and creating the image row representative graph containing the extracted representative values.
The motivation behind the modification would have been to obtain a radiographic imaging system that operates robustly and efficiently, along with automated detection and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Murthy are radiographic imaging systems that use feature vectors to obtain images, wherein Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Murthy has a radiographic imaging system wherein the method performed by the automatic collimation apparatus 38 of the present invention successfully overcomes the difficulties [of locating the body in an x-ray fluoroscopy image]. When the method is implemented as software, it operates robustly and efficiently on noisy, low contrast, possibly pre-collimated x-ray fluoroscopy images. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Murthy (US 6055295 A), Col. 3 Lines [45-47].
Regarding claim 4, Dujmic and Geng in view of Murthy teach a method according to claim 2,
Although Dujmic teaches a method of detecting a radiographic object (Fig. 1-2. Paragraph [0028]).
Dujmic and Geng fail to explicitly teach wherein the feature values include a feature of any one extremal point from among the extremal points, features of neighboring extremal points of the one extremal point, and a feature of a relationship between the one extremal point and the neighboring extremal points.
However, Murthy explicitly teaches wherein the feature values include a feature of any one extremal point (Col. 4, Lines [16-21]- Murthy discloses soft-tissue boundaries are detected in the present invention by determining negative curvature points along line profiles of intensity in multiple chosen scanning directions. These provide a reliable indicator of low-contrast boundaries, such as soft-tissue) from among the extremal points (Fig. 2, Col. 5 Lines [9-19]- Murthy discloses step E of the method depicted in FIG. 2 involves dividing the image into "regions". This is accomplished by providing the boundaries of significant regions in the image with a one-pixel representation of the cleaned up negative curvatures. The one-pixel representation is obtained by finding the local extrema in horizontal and vertical directions, combining them and performing simple noise removal using conventional connected components analysis techniques. The region information is for subsequently extracting global feature values which in turn, are used for classification as explained further on in greater detail),
features of neighboring extremal points of the one extremal point (Fig. 2, Col. 5 Lines [43-46]- Murthy discloses Step G of the method involves inter-region and intra-region propagation of the features. The features are first computed along scan lines in chosen directions in the image and then efficiently propagated over entire regions),
and a feature of a relationship between the one extremal point (Fig. 2, Col. 5, Lines [20-30]-Murthy discloses appropriate features such as range of intensity values, size, etc., are computed along horizontal and vertical lines within each region created in step E [of figure 2]. The method of the invention preferably uses three features for segmentation. These features are homogeneity, representative intensity, and station number. Homogeneity is the minimum amount of intensity variation along chosen directions, per pixel, inside a region. Representative intensity is the median intensity in a region. Station number is the number of the current station with respect to the full-leg study. Station numbers start at 0 at the pelvic region) and the neighboring extremal points (Fig. 2, Col. 5, Lines [33-36]-Murthy discloses additional features such as, the size of a region, the location of a region with respect to the station and with respect to the full-leg study, the variance of intensities in the region, etc., have been tried).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic and Geng in view of Murthy of having a method of detecting a radiographic object, with the teachings of Murthy wherein the feature values include a feature of any one extremal point from among the extremal points, features of neighboring extremal points of the one extremal point, and a feature of a relationship between the one extremal point and the neighboring extremal points.
The combination results in Dujmic’s method of detecting a radiographic object wherein the feature values include a feature of any one extremal point from among the extremal points, features of neighboring extremal points of the one extremal point, and a feature of a relationship between the one extremal point and the neighboring extremal points.
The motivation behind the modification would have been to obtain a radiographic imaging system that operates robustly and efficiently, along with automated detection and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Murthy are radiographic imaging systems that use feature vectors to obtain images, wherein Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Murthy has a radiographic imaging system wherein the method performed by the automatic collimation apparatus 38 of the present invention successfully overcomes the difficulties [of locating the body in an x-ray fluoroscopy image]. When the method is implemented as software, it operates robustly and efficiently on noisy, low contrast, possibly pre-collimated x-ray fluoroscopy images. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Murthy (US 6055295 A), Col. 3 Lines [45-47].
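Purely for illustration (the specific features chosen below — own value, neighbor values, a value difference, and a positional offset — are assumptions, not features recited in the record), a feature vector combining a feature of one extremal point, features of its neighboring extremal points, and relationship features might look like:

```python
def feature_vector(graph, idx, pts):
    """Build a feature vector for the extremal point at index idx.

    graph: 1D row representative graph; pts: sorted indices of all
    extremal points (must contain idx). The vector holds the point's
    own value, its neighbors' values, and point-to-neighbor relations.
    """
    k = pts.index(idx)
    prev_i = pts[k - 1] if k > 0 else idx             # left neighbor (or self)
    next_i = pts[k + 1] if k < len(pts) - 1 else idx  # right neighbor (or self)
    return [
        graph[idx],                    # feature of the point itself
        graph[prev_i], graph[next_i],  # features of neighboring extremal points
        graph[idx] - graph[prev_i],    # relationship: value difference
        next_i - idx,                  # relationship: positional offset
    ]
```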
Regarding claim 5, Dujmic and Geng in view of Murthy teach the method according to claim 1,
Dujmic further teaches wherein the training operation (Fig. 2, Paragraph [0074] – Dujmic discloses block 202, the machine-learning algorithm(s) employed in the image processing system 100 is trained to analyze a specific object type within a container using a training data set) further comprises:
generating training feature vectors (Fig. 1. Paragraph [0029] - Dujmic discloses in exemplary embodiments, the image processing system includes a computing device equipped with a processor in communication with a scanner, such as but not limited to an x-ray radiography scanner, configured to render radiographic images. As discussed further below, the computing device is configured to train a machine-learning algorithm(s) with a training data set that includes at least one radiographic image to generate at least one feature vector. The radiographic image includes at least one image of a test container storing objects),
including the target group of feature vectors (Fig. 4. Paragraph [0069] - Dujmic discloses in some embodiments, the machine-learning algorithm(s) is based on an autoencoder neural network and uses radiographic images of containers for the training data. The training minimizes the difference between the input image vector and the reconstructed image in each training sample... In the second part of the training, the feature vector of containers with the same object type [target group of feature vectors] is averaged to obtain the calibration data. Object types that are averaged may include, for example, containers with beer cases, refrigerators, or motorcycles)
and the control group of feature vectors (Fig. 14, Paragraph [0114] – Dujmic discloses an exemplary method employed by the image processing system to confirm that designated empty shipping containers are in fact empty… At step 1402, a computing device receives a rendered radiographic image of a shipping container. At step 1404, a feature vector is extracted from the image using an autoencoder neural network. At step 1406, the image processing system assigns a statistical probability of the emptiness of the scanned shipping container based on comparison of the extracted feature vector against a historic distribution of feature vectors [control group of feature vectors] extracted from radiographic images associated with prior empty shipping containers);
and training the training model (Fig. 1, #120 called training data set module, Paragraph [0066] – Dujmic discloses the training data set module 120 may be a software and/or hardware-implemented module configured to manage and store a training data set for the machine-learning algorithm(s) employed by the image processing system 100.)
Although Dujmic explicitly teaches to perform a classification operation classifying the training feature vectors into the target group of feature vectors (Fig. 1. Paragraph [0031] - Dujmic discloses the trained machine-learning algorithm(s) generates one or more feature vectors based on the training data set. Feature vectors are used to represent numeric or symbolic characteristics, called features, of an object in a mathematical way, and are used for pattern processing. The training data set is used to train the image processing system to identify patterns associated with the object type. The generated patterns are stored, and used to analyze feature vectors of one or more input images. The input images are images of objects obtained via a radiographic machine. Based on the feature vectors, in one embodiment, the image processing system is capable of identifying anomalies) and the control group of feature vectors (Fig. 1. Paragraph [0060], Dujmic discloses cargo type detection is performed by the image processing system described herein by analyzing radiographic images of one or more objects under inspection. The image processing system analyzes the images using an artificial type neural network to extract a feature vector. Each feature vector is compared against historic distribution feature vectors for the segment and the feature vectors are used to classify each of the one or more objects being scanned).
Dujmic fails to explicitly teach extracting feature values of the extremal points in the RoI among the extracted extremal points to generate a target group of feature vectors; extracting feature values of the extremal points in the region out of the RoI to generate a control group of feature vectors.
However, Geng explicitly teaches extracting feature values of the extremal points in the RoI among the extracted extremal points to generate a target group of feature vectors (Fig. 2A-C. Paragraph [0041]-Geng discloses around each of these fiducial points, we will extract the local surface characteristics ([x, y, z] coordinate value, surface curvatures, surface normal vector, etc) using a 3D data set of the neighboring points, as shown in FIG. 2B. The collection of the local features of all fiducial points forms a "feature vector" of this particular surface in this configuration),
extracting feature values of the extremal points in the region out of the RoI to generate a control group of feature vectors (Fig. 2A-C. Paragraph [0041]-Geng discloses around each of these fiducial points, we will extract the local surface characteristics ([x, y, z] coordinate value, surface curvatures, surface normal vector, etc) using a 3D data set of the neighboring points, as shown in FIG. 2B. The collection of the local features of all fiducial points forms a "feature vector" of this particular surface in this configuration).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic and Geng of having a method of detecting a radiographic object, the method comprising: receiving a radiographic image obtained by irradiating a region containing a plurality of tubes including at least one target tube as a radiographic object and at least one untargeted tube that is not a radiographic object, with the teachings of Geng wherein the training operation further comprises: extracting feature values of the extremal points in the RoI among the extracted extremal points to generate a target group of feature vectors; and extracting feature values of the extremal points in the region out of the RoI to generate a control group of feature vectors.
The combination results in Dujmic’s receiving of a training radiographic image wherein the training operation further comprises: extracting feature values of the extremal points in the RoI among the extracted extremal points to generate a target group of feature vectors; and extracting feature values of the extremal points in the region out of the RoI to generate a control group of feature vectors.
The motivation behind the modification would have been to obtain a radiographic imaging system that enhances processing speed, automated detection, and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Geng are radiographic imaging systems that use feature vectors to obtain images, wherein Dujmic has a radiographic imaging system that introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Geng has a radiographic imaging system wherein, instead of comparing all 3D surface data of a captured 3D image with that of the reference image, the feature vectors are compared to improve the processing speed and allow for real-time 3D image comparison. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Geng (US 20050096515 A1), Paragraph [0041].
Although Dujmic in view of Geng teaches the training operation (Fig. 2, Paragraph [0074] – Dujmic discloses block 202, the machine-learning algorithm(s) employed in the image processing system 100 is trained to analyze a specific object type within a container using a training data set),
Dujmic in view of Geng fails to explicitly teach extracting training extremal points from the training image row representative graph.
However, Murthy explicitly teaches extracting training extremal points (Fig. 2, Step E, Col. 5, Lines [9-15]- Murthy discloses Step E of the method depicted in FIG. 2 involves dividing the image into "regions". This is accomplished by providing the boundaries of significant regions in the image with a one-pixel representation of the cleaned up negative curvatures. The one-pixel representation is obtained by finding the local extrema in horizontal and vertical directions, combining them and performing simple noise removal using conventional connected components analysis techniques)
from the training image row representative graph (Col. 4, Lines [17-19]-Murthy discloses soft-tissue boundaries are detected in the present invention by determining negative curvature points along line profiles of intensity [row representative graph] in multiple chosen scanning directions).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic in view of Geng of having a method of detecting a radiographic object, the method comprising: receiving a radiographic image obtained by irradiating a region containing a plurality of tubes including at least one target tube as a radiographic object and at least one untargeted tube that is not a radiographic object, with the teachings of Murthy wherein the training operation further comprises: extracting training extremal points from the training image row representative graph.
The combination thus provides Dujmic’s receiving a training radiographic image, wherein the training operation further comprises: extracting training extremal points from the training image row representative graph.
The motivation behind the modification would have been to obtain a radiographic imaging system that operates robustly and efficiently, along with automated detection and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Murthy are radiographic imaging systems that use feature vectors to obtain images, wherein Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Murthy’s method, performed by the automatic collimation apparatus 38, successfully overcomes the difficulties [of locating the body in an x-ray fluoroscopy image]; when the method is implemented as software, it operates robustly and efficiently on noisy, low contrast, possibly pre-collimated x-ray fluoroscopy images. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Murthy (US 6055295 A), Col. 3 Lines [45-47].
Regarding claim 8, Dujmic teaches a device for detecting a radiographic object (Fig. 1-2. Paragraph [0028]-Dujmic discloses radiographic images are formed by creating x-rays with an x-ray source that is collimated to form a narrow fan beam, passing cargo through the beam, and detecting x-rays that pass through the cargo. Please also read Paragraph [0091] and Fig. 5.), the device comprising:
a memory storing a program (Fig. 6, #606 called memory, Paragraph [0101]); and at least one processor (Fig. 6, #602 called a processor, Paragraph [0101]) configured to perform, when executing the program: (Fig. 6, Paragraph [0101] – Dujmic discloses computing device 600 also includes processor 602 and associated core 604, and optionally, one or more additional processor(s) 602′ and associated core(s) 604′ (for example, in the case of computer systems having multiple processors/cores), for executing computer-readable and computer-executable instructions or software stored in the memory 606 and other programs for controlling system hardware)
a feature processing module configured to receive a radiographic image (Fig. 4A, illustrates images #400 with objects, Paragraph [0048] - Dujmic discloses the image processing system receives a rendered radiographic image that includes a container that is declared empty. See also Paragraph [0087])
obtained by irradiating a region containing a plurality of tubes (Fig. 4A. Paragraph [0087]-Dujmic discloses FIG. 4A illustrates radiographic images 400 of containers containing common object types, including refrigerators, motorcycles, beer, tobacco, and polyethylene terephthalate (PET), and an empty container) including at least one target tube as a radiographic object (Fig. 4A, illustrates beer cans in a truck wherein the beer cans are the target tube. Paragraph [0087]) and at least one untargeted tube that is not a radiographic object (Fig. 4A, illustrates a tube-like container that houses the cargo, which is the untargeted tube that is not a radiographic object. Paragraph [0072]-Dujmic discloses embodiments may perform manifest verification (MV) to verify that objects listed on the manifest correspond with objects in the containers, to perform empty container verification (ECV) to verify that containers are empty, to determine a cargo type or to perform object type detection, to search for specific objects and/or threats, and/or to perform anomaly detection to search for odd patterns. Further in paragraph [0087]-Dujmic discloses FIG. 4A illustrates radiographic images 400 of containers containing common object types, including refrigerators, motorcycles, beer, tobacco, and polyethylene terephthalate (PET), and an empty container);
and a detection module configured to analyze the feature vectors (Paragraph [0060]-Dujmic discloses Cargo type detection is performed by the image processing system described herein by analyzing radiographic images of one or more objects under inspection. The image processing system analyzes the images using an artificial type neural network to extract a feature vector. Each feature vector is compared against historic distribution feature vectors for the segment and the feature vectors are used to classify each of the one or more objects being scanned) using a training model to detect a region (Fig. 4, illustrates a cargo container wherein the cargo is detected) in the radiographic image where the at least one target tube as a radiographic object exists (Fig. 4-6. Paragraph [0035]),
Dujmic further teaches wherein, prior to receiving the radiographic image (Fig. 1, Paragraph [0029] – Dujmic discloses the computing device is configured to train a machine-learning algorithm(s) with a training data set that includes at least one radiographic image to generate at least one feature vector),
the at least one processor (Fig. 6, #602 called a processor, Paragraph [0101]) performs a training operation for training the training model (Fig. 1, #120 called training data set module, Paragraph [0066] – Dujmic discloses the training data set module 120 may be a software and/or hardware-implemented module configured to manage and store a training data set for the machine-learning algorithm(s) employed by the image processing system 100),
the training operation (Fig. 2, Paragraph [0074] – Dujmic discloses block 202, the machine-learning algorithm(s) employed in the image processing system 100 is trained to analyze a specific object type within a container using a training data set) comprises:
receiving a training radiographic image including a region of interest (RoI) (Fig. 1, Paragraph [0066]- Dujmic discloses the training data set includes images of containers obtained using a radiographic screening machine or device. The images may depict containers storing one or more objects.)
that is a region in which a target tube as a radiographic object exists (Fig. 4A illustrates beer cans in a cargo container [wherein the beer cans are the target tube or region of interest]. Paragraph [0087])
and a region out of the RoI (Fig. 4A illustrates a tube-like container that houses the cargo [wherein the container is the untargeted tube that is not a radiographic object, i.e., the region out of the RoI], see paragraph [0087]. Further, in paragraph [0072], Dujmic discloses embodiments may perform manifest verification (MV) to verify that objects listed on the manifest correspond with objects in the containers, to perform empty container verification (ECV) to verify that containers are empty, to determine a cargo type or to perform object type detection, to search for specific objects and/or threats, and/or to perform anomaly detection to search for odd patterns),
Although Dujmic teaches extracting vectors (Fig. 1, #100 called an imaging processing system; Paragraph [0048] - Dujmic discloses the image processing system divides the image into one or more segments that are analyzed using an autoencoder type neural network to extract a feature vector),
Dujmic fails to explicitly teach extracting feature values of extremal points from the radiographic image to detect feature vectors.
However, Geng explicitly teaches extracting feature values of extremal points (Fig. 2A-C, #f1-g called fiducial points. Paragraph [0040]) from the radiographic image to detect feature vectors (Fig. 2A-C. Paragraph [0041]-Geng discloses around each of these fiducial points, we will extract the local surface characteristics ([x, y, z] coordinate value, surface curvatures, surface normal vector, etc) using a 3D data set of the neighboring points, as shown in FIG. 2B. The collection of the local features of all fiducial points forms a "feature vector" of this particular surface in this configuration).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic of having a device for detecting a radiographic object, the device comprising: a memory storing a program; and at least one processor configured to perform, when executing the program: receiving a radiographic image obtained by irradiating a region containing a plurality of tubes including at least one target tube as a radiographic object and at least one untargeted tube that is not a radiographic object, with the teachings of Geng of extracting feature values of extremal points from the radiographic image to detect feature vectors.
The combination thus provides Dujmic’s detecting a radiographic object, wherein feature values of extremal points are extracted from the radiographic image to detect feature vectors.
The motivation behind the modification would have been to obtain a radiographic imaging system that enhances processing speed and automated detection and uses machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Geng are radiographic imaging systems that use feature vectors to obtain images, wherein Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Geng’s radiographic imaging system, instead of comparing all 3D surface data of a captured 3D image with that of the reference image, compares the feature vectors to improve the processing speed and allow for real-time 3D image comparison. Please see Dujmic (US 20200051017 A1), Paragraph [0025] and Geng (US 20050096515 A1), Paragraph [0041].
Although Dujmic in view of Geng teach training the training model (Fig. 1, #120 called training data set module, Paragraph [0066] – Dujmic discloses the training data set module 120 may be a software and/or hardware-implemented module configured to manage and store a training data set for the machine-learning algorithm(s) employed by the image processing system 100.),
Dujmic in view of Geng fail to explicitly teach extracting row representative values from the RoI in the radiographic image to create a training image row representative graph, and training the training model based on the training image row representative graph.
However, Murthy explicitly teaches extracting row representative values from the RoI in the radiographic image (Col. 4, Lines [16-21]- Murthy discloses soft-tissue boundaries are detected in the present invention by determining negative curvature points along line profiles of intensity in multiple chosen scanning directions. These provide a reliable indicator of low-contrast boundaries, such as soft-tissue. Further, in Fig. 3A, 3B, Col. 4 Lines [26-29], Murthy discloses an intensity profile along a horizontal line passing through the region of interest in FIG. 3A has points of negative curvature corresponding to these soft-tissue boundaries as shown in FIG. 3B.)
to create a training image row representative graph (Fig. 3A-3C, Col. 4, Lines [21-32]- Murthy discloses even for very low contrast or hazy soft-tissue boundaries, well defined points of negative curvature exist on the line profiles of intensity. This is illustrated in FIG. 3A-3C. FIG. 3A depicts a peripheral x-ray image with low-contrast soft tissue boundaries near the ankle. An intensity profile along a horizontal line passing through the region of interest in FIG. 3A has points of negative curvature corresponding to these soft-tissue boundaries as shown in FIG. 3B [row representative graph]. From a histogram equalized image of the line profile curvatures, very low-contrast boundaries are clearly preserved in the negative curvature image of FIG. 3C),
and training the training model (Fig. 2, Col. 5, Lines [51-58] – Murthy discloses in step H of the block diagram of FIG. 2, each pixel in the image is classified as a body part or a non-body part, based on its feature values, using a decision tree. This involves constructing a set of rules which enable body and non-body regions to be determined on the basis of the feature values. The rules are constructed using supervised learning and are therefore, referred to herein as automatically learned classifiers)
based on the training image row representative graph (Fig. 2, Col. 4, Lines [8-12] – Murthy discloses Step C involves detecting soft-tissue boundaries using directional curvatures of intensity profiles. Finding the soft-tissue boundaries accurately is critical because the subsequent steps of global feature extraction and classification rely on this information.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic in view of Geng having a device for detecting a radiographic object, the device comprising: a memory storing a program; and at least one processor configured to perform, when executing the program: receiving a radiographic image obtained by irradiating a region containing a plurality of tubes including at least one target tube as a radiographic object and at least one untargeted tube that is not a radiographic object; extracting feature values of extremal points from the radiographic image to detect feature vectors; and analyzing the feature vectors using a training model to detect a region in the radiographic image where the at least one target tube as a radiographic object exists, wherein, prior to receiving the radiographic image, performing a training operation for training the training model, the training operation comprises: receiving a training radiographic image including a region of interest (RoI) that is a region in which a target tube as a radiographic object exists and a region out of the RoI, with the teachings of Murthy having extracting row representative values from the RoI in the radiographic image to create a training image row representative graph, and training the training model based on the training image row representative graph.
The combination thus provides Dujmic’s device for detecting a radiographic object, wherein row representative values are extracted from the RoI in the radiographic image to create a training image row representative graph, and the training model is trained based on the training image row representative graph.
The motivation behind the modification would have been to obtain a radiographic imaging system that operates robustly and efficiently, along with automated detection and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Murthy are radiographic imaging systems that use feature vectors to obtain images, wherein Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Murthy’s method, performed by the automatic collimation apparatus 38, successfully overcomes the difficulties [of locating the body in an x-ray fluoroscopy image]; when the method is implemented as software, it operates robustly and efficiently on noisy, low contrast, possibly pre-collimated x-ray fluoroscopy images. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Murthy (US 6055295 A), Col. 3 Lines [45-47].
Regarding claim 9, Dujmic and Geng in view of Murthy teach the device according to claim 8,
Although Dujmic further teaches wherein detecting the feature vectors comprises certain steps (Fig. 4, illustrates a cargo container wherein the cargo is detected. Paragraph [0060]),
Dujmic fails to explicitly teach extracting feature values from the extremal points to generate feature vectors.
However, Geng explicitly teaches extracting feature values (Fig. 2A-C, #f1-g called fiducial points. Paragraph [0040]) from the extremal points to generate feature vectors (Fig. 2A-C. Paragraph [0041]-Geng discloses around each of these fiducial points, we will extract the local surface characteristics ([x, y, z] coordinate value, surface curvatures, surface normal vector, etc) using a 3D data set of the neighboring points, as shown in FIG. 2B. The collection of the local features of all fiducial points forms a "feature vector" of this particular surface in this configuration).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic of having a device for detecting a radiographic object, the device comprising: a memory storing a program; and at least one processor configured to perform, when executing the program: receiving a radiographic image obtained by irradiating a region containing a plurality of tubes including at least one target tube as a radiographic object and at least one untargeted tube that is not a radiographic object, with the teachings of Geng of extracting feature values from the extremal points to generate feature vectors.
The combination thus provides Dujmic’s detecting a radiographic object, wherein feature values are extracted from the extremal points to generate feature vectors.
The motivation behind the modification would have been to obtain a radiographic imaging system that enhances processing speed and automated detection and uses machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Geng are radiographic imaging systems that use feature vectors to obtain images, wherein Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Geng’s radiographic imaging system, instead of comparing all 3D surface data of a captured 3D image with that of the reference image, compares the feature vectors to improve the processing speed and allow for real-time 3D image comparison. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Geng (US 20050096515 A1), Paragraph [0041].
Although Dujmic in view of Geng teach extracting row representative values (Fig. 2A-2C, Paragraph [0041]),
Dujmic in view of Geng fail to explicitly teach extracting row representative values from the radiographic image to create an image row representative graph; and extracting extremal points from the image row representative graph.
However, Murthy explicitly teaches extracting row representative values (Fig. 3A, 3B, Col. 4 Lines [26-29]- Murthy discloses an intensity profile along a horizontal line passing through the region of interest in FIG. 3A has points of negative curvature corresponding to these soft-tissue boundaries as shown in FIG. 3B.) from the radiographic image to create an image row representative graph (Fig. 3A-3C, Col. 4, Lines [23-32]- Murthy discloses a peripheral x-ray image with low-contrast soft tissue boundaries near the ankle. An intensity profile along a horizontal line passing through the region of interest in FIG. 3A has points of negative curvature corresponding to these soft-tissue boundaries as shown in FIG. 3B. From a histogram equalized image of the line profile curvatures, very low-contrast boundaries are clearly preserved in the negative curvature image of FIG. 3C.);
extracting extremal points (Fig. 2, Step E, Col. 5, Lines [9-15]- Murthy discloses Step E of the method depicted in FIG. 2 involves dividing the image into "regions". This is accomplished by providing the boundaries of significant regions in the image with a one-pixel representation of the cleaned up negative curvatures. The one-pixel representation is obtained by finding the local extrema in horizontal and vertical directions, combining them and performing simple noise removal using conventional connected components analysis techniques.) from the image row representative graph (Col. 4, Lines [17-19]-Murthy discloses soft-tissue boundaries are detected in the present invention by determining negative curvature points along line profiles of intensity [row representative graph] in multiple chosen scanning directions.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic in view of Geng having a device for detecting a radiographic object wherein detecting the feature vectors comprises: extracting feature values from the extremal points to generate feature vectors, with the teachings of Murthy having extracting row representative values from the radiographic image to create an image row representative graph; and extracting extremal points from the image row representative graph.
The combination thus provides Dujmic’s detecting a radiographic object, wherein detecting the feature vectors comprises: extracting row representative values from the radiographic image to create an image row representative graph; and extracting extremal points from the image row representative graph.
The motivation behind the modification would have been to obtain a radiographic imaging system that operates robustly and efficiently, along with automated detection and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Murthy are radiographic imaging systems that use feature vectors to obtain images, wherein Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Murthy’s automatic collimation apparatus 38 successfully overcomes the difficulties [of locating the body in an x-ray fluoroscopy image]; when the method is implemented as software, it operates robustly and efficiently on noisy, low contrast, possibly pre-collimated x-ray fluoroscopy images. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Murthy (US 6055295 A), Col. 3 Lines [45-47].
Regarding claim 10, Dujmic and Geng in view of Murthy teach the device according to claim 9,
Although Dujmic teaches a device for detecting a radiographic object (Fig. 1-2. Paragraph [0028]-Dujmic discloses radiographic images are formed by creating x-rays with an x-ray source that is collimated to form a narrow fan beam, passing cargo through the beam, and detecting x-rays that pass through the cargo. Please also see Paragraph [0091] and Fig. 5.),
Dujmic and Geng fail to explicitly teach wherein creating the image row representative graph comprises: extracting representative values for a plurality of pixels belonging to the same row through statistical analysis of the plurality of pixels in the radiographic image, and creating the image row representative graph containing the extracted representative values.
However, Murthy explicitly teaches wherein creating the image row representative graph (Fig. 3A-3C, Col. 4, Lines [23-32]) comprises:
extracting representative values for a plurality of pixels (Col. 4, Lines [16-21]- Murthy discloses soft-tissue boundaries are detected in the present invention by determining negative curvature points along line profiles of intensity in multiple chosen scanning directions. These provide a reliable indicator of low-contrast boundaries, such as soft-tissue.) belonging to the same row through statistical analysis of the plurality of pixels in the radiographic image (Col. 5, Lines [43-48]- Murthy discloses the method involves inter-region and intra-region propagation of the features. The features are first computed along scan lines in chosen directions [rows] in the image and then efficiently propagated over entire regions. Well known adaptive smoothing techniques are used herein for feature value propagation),
and creating the image row representative graph (Fig. 3A-3C, Col. 4, Lines [21-32]- Murthy discloses even for very low contrast or hazy soft-tissue boundaries, well defined points of negative curvature exist on the line profiles of intensity. This is illustrated in FIG. 3A-3C. FIG. 3A depicts a peripheral x-ray image with low-contrast soft tissue boundaries near the ankle. An intensity profile along a horizontal line passing through the region of interest in FIG. 3A has points of negative curvature corresponding to these soft-tissue boundaries as shown in FIG. 3B. From a histogram equalized image of the line profile curvatures, very low-contrast boundaries are clearly preserved in the negative curvature image of FIG. 3C) containing the extracted representative values (Col. 4, Lines [17-19]- Murthy discloses soft-tissue boundaries are detected in the present invention by determining negative curvature points along line profiles of intensity in multiple chosen scanning directions).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic in view of Geng of having a device for detecting a radiographic object, with the teachings of Murthy having wherein creating the image row representative graph comprises: extracting representative values for a plurality of pixels belonging to the same row through statistical analysis of the plurality of pixels in the radiographic image, and creating the image row representative graph containing the extracted representative values.
The combination thus provides Dujmic’s detecting a radiographic object, wherein creating the image row representative graph comprises: extracting representative values for a plurality of pixels belonging to the same row through statistical analysis of the plurality of pixels in the radiographic image, and creating the image row representative graph containing the extracted representative values.
The motivation behind the modification would have been to obtain a radiographic imaging system that operates robustly and efficiently, along with automated detection and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Murthy are radiographic imaging systems that use feature vectors to obtain images, wherein Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Murthy’s method, performed by the automatic collimation apparatus 38, successfully overcomes the difficulties [of locating the body in an x-ray fluoroscopy image]; when the method is implemented as software, it operates robustly and efficiently on noisy, low contrast, possibly pre-collimated x-ray fluoroscopy images. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Murthy (US 6055295 A), Col. 3 Lines [45-47].
Regarding claim 11, Dujmic and Geng in view of Murthy teach the device according to claim 9,
Although Dujmic further teaches a device for detecting a radiographic object (Fig. 1-2. Paragraph [0028]-Dujmic discloses radiographic images are formed by creating x-rays with an x-ray source that is collimated to form a narrow fan beam, passing cargo through the beam, and detecting x-rays that pass through the cargo. Please also see Paragraph [0091] and Fig. 5.),
Dujmic and Geng fail to explicitly teach wherein the feature values include a feature of any one extremal point from among the extremal points, features of neighboring extremal points of the one extremal point, and a feature of a relationship between the one extremal point and the neighboring extremal points.
However, Murthy explicitly teaches wherein the feature values include a feature of any one extremal point (Col. 4, Lines [16-21]- Murthy discloses soft-tissue boundaries are detected in the present invention by determining negative curvature points along line profiles of intensity in multiple chosen scanning directions. These provide a reliable indicator of low-contrast boundaries, such as soft-tissue) from among the extremal points (Fig. 2, Col. 5 Lines [9-19]- Murthy discloses step E of the method depicted in FIG. 2 involves dividing the image into "regions". This is accomplished by providing the boundaries of significant regions in the image with a one-pixel representation of the cleaned up negative curvatures. The one-pixel representation is obtained by finding the local extrema in horizontal and vertical directions, combining them and performing simple noise removal using conventional connected components analysis techniques. The region information is for subsequently extracting global feature values which in turn, are used for classification as explained further on in greater detail),
features of neighboring extremal points of the one extremal point (Fig. 2, Col. 5 Lines [43-46]- Murthy discloses Step G of the method involves inter-region and intra-region propagation of the features. The features are first computed along scan lines in chosen directions in the image and then efficiently propagated over entire regions.),
and a feature of a relationship between the one extremal point (Fig. 2, Col. 5, Lines [20-30]-Murthy discloses appropriate features such as range of intensity values, size, etc., are computed along horizontal and vertical lines within each region created in step E [of figure 2]. The method of the invention preferably uses three features for segmentation. These features are homogeneity, representative intensity, and station number. Homogeneity is the minimum amount of intensity variation along chosen directions, per pixel, inside a region. Representative intensity is the median intensity in a region. Station number is the number of the current station with respect to the full-leg study. Station numbers start at 0 at the pelvic region) and the neighboring extremal points (Fig. 2, Col. 5, Lines [33-36]-Murthy discloses additional features such as, the size of a region, the location of a region with respect to the station and with respect to the full-leg study, the variance of intensities in the region, etc., have been tried).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic and Geng in view of Murthy of having a device for detecting a radiographic object, with the teachings of Murthy having wherein the feature values include a feature of any one extremal point from among the extremal points, features of neighboring extremal points of the one extremal point, and a feature of a relationship between the one extremal point and the neighboring extremal points.
The combination thus provides Dujmic’s detecting a radiographic object, wherein the feature values include a feature of any one extremal point from among the extremal points, features of neighboring extremal points of the one extremal point, and a feature of a relationship between the one extremal point and the neighboring extremal points.
The motivation behind the modification would have been to obtain a radiographic imaging system that operates robustly and efficiently, along with automated detection and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo, since both Dujmic and Murthy are radiographic imaging systems that use feature vectors to obtain images, wherein Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Murthy’s method, performed by the automatic collimation apparatus 38, successfully overcomes the difficulties [of locating the body in an x-ray fluoroscopy image]; when the method is implemented as software, it operates robustly and efficiently on noisy, low contrast, possibly pre-collimated x-ray fluoroscopy images. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Murthy (US 6055295 A), Col. 3 Lines [45-47].
Regarding claim 12, Dujmic and Geng in view of Murthy teach the device according to claim 8,
Dujmic further teaches wherein the training operation (Fig. 2, Paragraph [0074] – Dujmic discloses block 202, the machine-learning algorithm(s) employed in the image processing system 100 is trained to analyze a specific object type within a container using a training data set) further comprises:
generate training feature vectors (Fig. 1. Paragraph [0029] - Dujmic discloses in exemplary embodiments, the image processing system includes a computing device equipped with a processor in communication with a scanner, such as but not limited to an x-ray radiography scanner, configured to render radiographic images. As discussed further below, the computing device is configured to train a machine-learning algorithm(s) with a training data set that includes at least one radiographic image to generate at least one feature vector. The radiographic image includes at least one image of a test container storing objects)
including the target group of feature vectors (Fig. 4. Paragraph [0069] - Dujmic discloses in some embodiments, the machine-learning algorithm(s) is based on an autoencoder neural network and uses radiographic images of containers for the training data. The training minimizes the difference between the input image vector and the reconstructed image in each training sample... In the second part of the training, the feature vector of containers with the same object type [target group of feature vectors] is averaged to obtain the calibration data. Object types that are averaged may include, for example, containers with beer cases, refrigerators, or motorcycles)
and the control group of feature vectors (Fig. 14, Paragraph [0114] – Dujmic discloses an exemplary method employed by the image processing system to confirm that designated empty shipping containers are in fact empty… At step 1402, a computing device receives a rendered radiographic image of a shipping container. At step 1404, a feature vector is extracted from the image using an autoencoder neural network. At step 1406, the image processing system assigns a statistical probability of the emptiness of the scanned shipping container based on comparison of the extracted feature vector against a historic distribution of feature vectors [control group of feature vectors] extracted from radiographic images associated with prior empty shipping containers);
and training the training model (Fig. 1, #120 called training data set module, Paragraph [0066] – Dujmic discloses the training data set module 120 may be a software and/or hardware-implemented module configured to manage and store a training data set for the machine-learning algorithm(s) employed by the image processing system 100.)
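For illustration only, the calibration and comparison steps Dujmic describes (averaging feature vectors of containers with the same object type, Paragraph [0069], and scoring a new feature vector against a historic distribution of feature vectors from prior empty containers, Paragraph [0114]) can be sketched as follows; the array contents, dimensionality, and function names are hypothetical and not part of the record:

```python
import numpy as np

def calibrate(feature_vectors):
    """Average feature vectors of containers with the same object type
    to obtain calibration data (cf. Dujmic, Paragraph [0069])."""
    return np.mean(feature_vectors, axis=0)

def distribution_distance(vector, historic_vectors):
    """Score a new feature vector against the historic (control)
    distribution of feature vectors (cf. Dujmic, Paragraph [0114]).
    A smaller distance indicates a closer match to prior empties."""
    mean = np.mean(historic_vectors, axis=0)
    std = np.std(historic_vectors, axis=0) + 1e-9  # avoid division by zero
    return float(np.linalg.norm((vector - mean) / std))

# Hypothetical 4-dimensional feature vectors for one object type.
train = np.array([[1.0, 2.0, 0.5, 0.1],
                  [1.2, 1.8, 0.4, 0.2],
                  [0.8, 2.2, 0.6, 0.0]])
calibration = calibrate(train)
score = distribution_distance(np.array([1.0, 2.0, 0.5, 0.1]), train)
```

A vector identical to the historic mean yields a distance of zero, consistent with assigning a high probability that the scanned container matches the prior (e.g., empty) population.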
Although Dujmic explicitly teaches to perform a classification operation classifying the training feature vectors into the target group of feature vectors (Fig. 1. Paragraph [0031] - Dujmic discloses the trained machine-learning algorithm(s) generates one or more feature vectors based on the training data set. Feature vectors are used to represent numeric or symbolic characteristics, called features, of an object in a mathematical way, and are used for pattern processing. The training data set is used to train the image processing system to identify patterns associated with the object type. The generated patterns are stored, and used to analyze feature vectors of one or more input images. The input images are images of objects obtained via a radiographic machine. Based on the feature vectors, in one embodiment, the image processing system is capable of identifying anomalies) and the control group of feature vectors (Fig. 1. Paragraph [0060], Dujmic discloses cargo type detection is performed by the image processing system described herein by analyzing radiographic images of one or more objects under inspection. The image processing system analyzes the images using an artificial type neural network to extract a feature vector. Each feature vector is compared against historic distribution feature vectors for the segment and the feature vectors are used to classify each of the one or more objects being scanned),
Dujmic fails to explicitly teach extracting feature values of the extremal points in the RoI among the extracted extremal points to generate a target group of feature vectors; and extracting feature values of the extremal points in the region out of the RoI to generate a control group of feature vectors.
However, Geng explicitly teaches extracting feature values of the extremal points in the RoI among the extracted extremal points to generate a target group of feature vectors (Fig. 2A-C. Paragraph [0041]-Geng discloses around each of these fiducial points, we will extract the local surface characteristics ([x, y, z] coordinate value, surface curvatures, surface normal vector, etc) using a 3D data set of the neighboring points, as shown in FIG. 2B. The collection of the local features of all fiducial points forms a "feature vector" of this particular surface in this configuration);
extracting feature values of the extremal points in the region out of the RoI to generate a control group of feature vectors (Fig. 2A-C. Paragraph [0041]-Geng discloses around each of these fiducial points, we will extract the local surface characteristics ([x, y, z] coordinate value, surface curvatures, surface normal vector, etc) using a 3D data set of the neighboring points, as shown in FIG. 2B. The collection of the local features of all fiducial points forms a "feature vector" of this particular surface in this configuration).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic, having a device for detecting a radiographic object, with the teachings of Geng, having the training operation further comprising: extracting feature values of the extremal points in the RoI among the extracted extremal points to generate a target group of feature vectors; and extracting feature values of the extremal points in the region out of the RoI to generate a control group of feature vectors.
The modification results in Dujmic’s receiving of a training radiographic image wherein the training operation further comprises: extracting feature values of the extremal points in the RoI among the extracted extremal points to generate a target group of feature vectors; and extracting feature values of the extremal points in the region out of the RoI to generate a control group of feature vectors.
The motivation behind the modification would have been to obtain a radiographic imaging system with enhanced processing speed, automated detection, and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo. Both Dujmic and Geng are radiographic imaging systems that use feature vectors to analyze images: Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Geng discloses a radiographic imaging system wherein, rather than comparing all 3D surface data of a captured 3D image with that of the reference image, the feature vectors are compared to improve the processing speed and allow for real-time 3D image comparison. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Geng (US 20050096515 A1), Paragraph [0041].
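For illustration only, the claimed split of extremal-point feature values into a target group (extremal points inside the RoI) and a control group (extremal points outside the RoI) can be sketched as follows; the rectangular RoI representation, coordinates, and feature values are hypothetical and not drawn from either reference:

```python
import numpy as np

def split_by_roi(points, features, roi):
    """Split feature values of extremal points into a target group
    (points inside the RoI) and a control group (points outside).
    roi is a hypothetical rectangle (x0, y0, x1, y1); points is an
    (N, 2) array of (x, y) extremal-point coordinates."""
    x0, y0, x1, y1 = roi
    inside = ((points[:, 0] >= x0) & (points[:, 0] <= x1) &
              (points[:, 1] >= y0) & (points[:, 1] <= y1))
    return features[inside], features[~inside]

points = np.array([[5, 5], [20, 20], [6, 7], [30, 2]])
features = np.array([[0.9], [0.1], [0.8], [0.2]])
target, control = split_by_roi(points, features, roi=(0, 0, 10, 10))
```

Here the first and third extremal points fall inside the RoI and form the target group of feature vectors; the remaining two form the control group.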
Although Dujmic in view of Geng teach the training operation (Fig. 2, Paragraph [0074] – Dujmic discloses block 202, the machine-learning algorithm(s) employed in the image processing system 100 is trained to analyze a specific object type within a container using a training data set),
Dujmic in view of Geng fail to explicitly teach extracting training extremal points from the training image row representative graph.
However, Murthy explicitly teaches extracting training extremal points (Fig. 2, Step E, Col. 5, Lines [9-15]- Murthy discloses Step E of the method depicted in FIG. 2 involves dividing the image into "regions". This is accomplished by providing the boundaries of significant regions in the image with a one-pixel representation of the cleaned up negative curvatures. The one-pixel representation is obtained by finding the local extrema in horizontal and vertical directions, combining them and performing simple noise removal using conventional connected components analysis techniques)
from the training image row representative graph (Col. 4, Lines [17-19]-Murthy discloses soft-tissue boundaries are detected in the present invention by determining negative curvature points along line profiles of intensity [row representative graph] in multiple chosen scanning directions).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic in view of Geng, having a device for detecting a radiographic object, with the teachings of Murthy, having the training operation further comprising: extracting training extremal points from the training image row representative graph.
The modification results in Dujmic’s receiving of a training radiographic image wherein the training operation further comprises: extracting training extremal points from the training image row representative graph.
The motivation behind the modification would have been to obtain a radiographic imaging system that operates robustly and efficiently, provides automated detection, and uses machine-learning algorithm(s) to achieve 100% inspection of all cargo. Both Dujmic and Murthy are radiographic imaging systems that use feature vectors to analyze images: Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Murthy discloses a radiographic imaging system wherein the method performed by the automatic collimation apparatus 38 of the present invention successfully overcomes the difficulties [of locating the body in an x-ray fluoroscopy image]… When the method is implemented as software, it operates robustly and efficiently on noisy, low contrast, possibly pre-collimated x-ray fluoroscopy images. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Murthy (US 6055295 A), Col. 3, Lines [45-47].
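For illustration only, Murthy's search for local extrema along a line profile of intensity (Col. 5, Lines 9-15) can be sketched as a one-dimensional scan over a single row; the profile values below are hypothetical:

```python
import numpy as np

def row_extrema(profile):
    """Indices of strict local extrema (maxima and minima) along a
    single row-intensity profile, akin to Murthy's local extrema
    search in the horizontal direction."""
    p = np.asarray(profile, dtype=float)
    left = p[1:-1] - p[:-2]    # difference to the left neighbor
    right = p[1:-1] - p[2:]    # difference to the right neighbor
    maxima = np.where((left > 0) & (right > 0))[0] + 1
    minima = np.where((left < 0) & (right < 0))[0] + 1
    return np.sort(np.concatenate([maxima, minima]))

# Hypothetical intensity values along one image row.
profile = [0, 2, 1, 3, 0, 0, 4]
extrema = row_extrema(profile)
```

The combined extrema from each row would then feed the training-extremal-point extraction recited in the claim; noise removal (e.g., connected-components analysis as in Murthy) is omitted from this sketch.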
Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Dujmic (US 20200051017 A1), hereinafter referenced as Dujmic, in view of Geng (US 20050096515 A1), hereinafter referenced as Geng, further in view of Murthy (US 6055295 A), hereinafter referenced as Murthy, further in view of Gates (US 20070203861 A1), hereinafter referenced as Gates.
Regarding claim 6, Dujmic and Geng in view of Murthy teach a method according to claim 5,
Although Dujmic teaches the training of the training model (Fig. 1, #120 called training data set module, Paragraph [0066] – Dujmic discloses the training data set module 120 may be a software and/or hardware-implemented module configured to manage and store a training data set for the machine-learning algorithm(s) employed by the image processing system 100),
Dujmic and Geng in view of Murthy fail to explicitly teach performing an optimization to modify a weighted vector of a decision boundary separating the target group of feature vectors and the control group of feature vectors so that a margin representing the distance from the decision boundary to each of the support vectors of the target group of feature vectors and of the control group of feature vectors is maximized through an objective function of the training model.
However, Gates explicitly teaches wherein the training of the training model comprises: performing an optimization to modify a weighted vector of a decision boundary (Fig. 2, Paragraphs [0023-0025] - Gates discloses according to a first aspect of the present invention there is provided a method for operating a computational device as a support vector machine in order to define a decision surface separating two opposing classes of a training set of vectors, the method including the steps of: associating a distance parameter with each vector of the training set, the distance parameter indicating a distance from its associated vector to the opposite class; and determining a linearly independent set of support vectors from the training set such that the sum of the distances associated with the linearly independent support vectors is minimised)
separating the target group of feature vectors (Fig. 5, Paragraphs [0031-0032]- Gates discloses according to a further aspect of the present invention there is provided a computer software product comprising a computer readable medium for execution by one or more processors of a computer system, the software product including: instructions to define a decision surface separating two opposing classes [target group vs. control group] of a training set of vectors) and the control group of feature vectors (Fig. 5, Paragraphs [0031-0032])
so that a margin representing the distance from the decision boundary to each of the support vectors of the target group of feature vectors and of the control group of feature vectors (Fig. 2, Paragraph [0056]- Gates discloses the hyperplane with this property is the one that leaves the maximum margin between the two classes, where the margin is defined as the sum of the distances of the hyperplane from the closest points of the two classes. The support vector machine works on finding the maximum margin separating the hyperplane between two subject groups [target group vs. control group] through the minimization of a given quadratic programming problem)
is maximized through an objective function of the training model (Paragraph [0056] – Gates discloses the support vector machine works on finding the maximum margin separating the hyperplane between two subject groups through the minimization of a given quadratic programming problem. The present inventor has realized that given that it is desirable to find the maximum margin, and that we can calculate the distance between any two points in the test set, the optimal vectors to preselect as potential support vectors are those closest to the decision hyperplane. The vectors closest will be the ones with the minimum distance to the opposing class).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic and Geng in view of Murthy, having a method of detecting a radiographic object by training the training model, with the teachings of Gates, wherein the training of the training model comprises: performing an optimization to modify a weighted vector of a decision boundary separating the target group of feature vectors and the control group of feature vectors so that a margin representing the distance from the decision boundary to each of the support vectors of the target group of feature vectors and of the control group of feature vectors is maximized through an objective function of the training model.
The modification results in Dujmic’s detection of a radiographic object wherein the training of the training model comprises performing an optimization to modify a weighted vector of a decision boundary separating the target group of feature vectors and the control group of feature vectors so that a margin representing the distance from the decision boundary to each of the support vectors of the target group of feature vectors and of the control group of feature vectors is maximized through an objective function of the training model.
The motivation behind the modification would have been to obtain a radiographic imaging system with enhanced accuracy and efficiency, automated detection, and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo. Both Dujmic and Gates are systems that process vectors: Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Gates has a support vector machine that works on finding the maximum margin separating the hyperplane between two subject groups through the minimization of a given quadratic programming problem. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Gates (US 20070203861 A1), Paragraph [0056].
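For illustration only, the margin-maximizing optimization relied upon from Gates (Paragraph [0056]) can be sketched as subgradient descent on a soft-margin objective over two groups of feature vectors; this is a generic support-vector-machine sketch, not Gates's quadratic-programming implementation, and the data, learning rate, and epoch count are hypothetical:

```python
import numpy as np

def train_svm(X, y, R=10.0, lr=0.01, epochs=2000):
    """Subgradient descent on the soft-margin objective
    (1/2)||w||^2 + R * sum_i max(0, 1 - y_i (w.x_i + b)),
    which maximizes the margin between the two classes by
    modifying the weighted vector w of the decision boundary."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1  # points inside the margin (slack terms)
        grad_w = w - R * (y[viol, None] * X[viol]).sum(axis=0)
        grad_b = -R * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Hypothetical target group (+1) and control group (-1) in 2-D.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = train_svm(X, y)
preds = np.sign(X @ w + b)
```

After training, the sign of the decision function separates the target group from the control group, with the support vectors being the points closest to the decision boundary.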
Regarding claim 13, Dujmic and Geng in view of Murthy teach a device according to claim 12,
Although Dujmic teaches the training of the training model (Fig. 1, #120 called training data set module, Paragraph [0066] – Dujmic discloses the training data set module 120 may be a software and/or hardware-implemented module configured to manage and store a training data set for the machine-learning algorithm(s) employed by the image processing system 100),
Dujmic and Geng in view of Murthy fail to explicitly teach performing an optimization to modify a weighted vector of a decision boundary separating the target group of feature vectors and the control group of feature vectors so that a margin representing the distance from the decision boundary to each of support vectors of the target group of feature vectors and of the control group of feature vectors is maximized through an objective function of the training model.
However, Gates explicitly teaches performing an optimization to modify a weighted vector of a decision boundary (Fig. 2, Paragraphs [0040-0042]- Gates discloses a computational device configured to define a decision surface separating two opposing classes of a training set of vectors, the computational device including one or more processors arranged to: associate a distance parameter with each vector of the training set, the distance parameter indicating a distance from its associated vector to the opposite class; and determine a linearly independent set of support vectors from the training set such that the sum of the distances associated with the linearly independent support vectors is minimised)
separating the target group of feature vectors (Fig. 5, Paragraphs [0031-0032]- Gates discloses according to a further aspect of the present invention there is provided a computer software product comprising a computer readable medium for execution by one or more processors of a computer system, the software product including: instructions to define a decision surface separating two opposing classes [target group vs. control group] of a training set of vectors) and the control group of feature vectors (Fig. 5, Paragraphs [0031-0032])
so that a margin representing the distance from the decision boundary to each of support vectors of the target group of feature vectors and of the control group of feature vectors (Fig. 2, Paragraph [0056]- Gates discloses the hyperplane with this property is the one that leaves the maximum margin between the two classes, where the margin is defined as the sum of the distances of the hyperplane from the closest points of the two classes. The support vector machine works on finding the maximum margin separating the hyperplane between two subject groups [target group vs. control group] through the minimization of a given quadratic programming problem)
is maximized through an objective function of the training model (Paragraph [0056] – Gates discloses the support vector machine works on finding the maximum margin separating the hyperplane between two subject groups through the minimization of a given quadratic programming problem. The present inventor has realized that given that it is desirable to find the maximum margin, and that we can calculate the distance between any two points in the test set, the optimal vectors to preselect as potential support vectors are those closest to the decision hyperplane. The vectors closest will be the ones with the minimum distance to the opposing class).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dujmic and Geng in view of Murthy, having a device for detecting a radiographic object by training the training model, with the teachings of Gates, having: performing an optimization to modify a weighted vector of a decision boundary separating the target group of feature vectors and the control group of feature vectors so that a margin representing the distance from the decision boundary to each of support vectors of the target group of feature vectors and of the control group of feature vectors is maximized through an objective function of the training model.
The modification results in Dujmic’s detection of a radiographic object wherein the training of the training model comprises performing an optimization to modify a weighted vector of a decision boundary separating the target group of feature vectors and the control group of feature vectors so that a margin representing the distance from the decision boundary to each of support vectors of the target group of feature vectors and of the control group of feature vectors is maximized through an objective function of the training model.
The motivation behind the modification would have been to obtain a radiographic imaging system with enhanced accuracy and efficiency, automated detection, and the use of machine-learning algorithm(s) to achieve 100% inspection of all cargo. Both Dujmic and Gates are systems that process vectors: Dujmic’s radiographic imaging system introduces machine-learning algorithm(s) for cargo inspection that can replace or aid operators in order to achieve 100% inspection of all cargo, while Gates has a support vector machine that works on finding the maximum margin separating the hyperplane between two subject groups through the minimization of a given quadratic programming problem. Please see Dujmic (US 20200051017 A1), Paragraph [0025], and Gates (US 20070203861 A1), Paragraph [0056].
Allowable Subject Matter
Claims 7 and 14, along with their dependent claims, are objected to as being dependent upon rejected base claims 1 and 8, respectively, but would be allowable if rewritten in independent form including all of the limitations of the base claims and any intervening claims, once the specification, drawing, and claim objections, along with the 112(a) and 112(b) rejections, are overcome.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 7, the prior art of record fails to explicitly teach wherein the training of the training model comprises the optimization that the margin is maximized orthogonally to the weighted vector of the decision boundary according to the following equation:
minimize over w and ξ: (1/2)‖w‖² + R · Σ_{i=1}^{N} ξ_i
where: w represents the weighted vector,
ξ
is the tolerance, R is the regularization parameter, i is the index of the training feature vector, and N is the number of training feature vectors, as claimed in claim 7.
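For illustration only, an objective of the form defined by the recited variables (w the weighted vector, ξ the tolerance, R the regularization parameter, N the number of training feature vectors) can be evaluated numerically; the specific values below are hypothetical, and the standard soft-margin form is assumed:

```python
import numpy as np

def objective(w, xi, R):
    """Soft-margin objective value: (1/2)||w||^2 + R * sum_i xi_i,
    whose minimization maximizes the margin orthogonal to w."""
    return 0.5 * float(np.dot(w, w)) + R * float(np.sum(xi))

# Hypothetical weighted vector, per-sample tolerances, and R.
val = objective(np.array([3.0, 4.0]), xi=np.array([0.1, 0.0, 0.3]), R=2.0)
```

Smaller ‖w‖ corresponds to a wider margin, while the R-weighted tolerance terms penalize training feature vectors that fall inside the margin.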
Regarding claim 14, the prior art of record fails to explicitly teach wherein the training of the training model comprises the optimization that the margin is maximized orthogonally to the weighted vector of the decision boundary according to the following equation:
minimize over w and ξ: (1/2)‖w‖² + R · Σ_{i=1}^{N} ξ_i
where: w represents the weighted vector,
ξ
is the tolerance, R is the regularization parameter, i is the index of the training feature vector, and N is the number of training feature vectors, as claimed in claim 14.
Conclusion
The following prior art, made of record and not relied upon, is considered pertinent to applicant’s disclosure.
Dou et al. (US 20200320326 A1) - Systems and methods for determining at least one scanning parameter for a scanning by an imaging device (110) are provided. The methods may include obtaining a scout image of at least one portion of a subject (502), and determining, in the scout image, a region of interest (ROI) corresponding to the at least one portion of the subject (504). The methods may further include determining, based on the ROI, the at least one scanning parameter associated with the at least one portion of the subject for performing the scanning by the imaging device (506). Systems and methods for evaluating a scanning parameter are further provided. The methods may include determining a scanning parameter associated with the ROI (1606) and obtaining a reference scanning parameter associated with the ROI (1608). The methods may further include determining whether the scanning parameter needs to be adjusted by comparing the scanning parameter… Figs. 14, 15A, 16, Abstract.
Sharma et al. (US 20120134576 A1) - Presented is a method of automatically performing an action, based on graphical input. The method comprises: receiving, for a user, an input image; comparing the input image with the contents of a user-customized database comprising a plurality of records, each record representing a predefined class of image, wherein the user has previously associated records in the database with respective specified actions; attempting to recognize the image, based on the similarity of the input image to one of the predefined classes of image represented in the user-customised database; and if the image is recognized, performing the action previously associated by the user with the class… Figs. 1, 3, 4, Abstract.
Venkatachalam et al. (US 8204291 B2) - A method for processing a radiographic image of a scanned object is provided. The method comprises acquiring radiographic image data corresponding to a scanned object and identifying one or more regions of interest in the radiographic image data corresponding to the scanned object. The method further comprises performing an image-contrast comparison of the radiographic image data corresponding to the scanned object and one or more reference radiographic images, to identify one or more defects in the radiographic image data corresponding to the scanned object… Fig. 2, Abstract.
Venkatachalam et al. (US 20090066939 A1) - A method for automatically identifying defects in turbine engine blades is provided. The method comprises acquiring one or more radiographic images corresponding to one or more turbine engine blades and identifying one or more regions of interest from the one or more radiographic images. The method then comprises extracting one or more geometric features based on the one or more regions of interest and analyzing the one or more geometric features to identify one or more defects in the turbine engine blades… Figs. 2, 6, Abstract.
Roh et al. (US 20150206614 A1) - A radiographic apparatus may comprise: a radiation irradiating module configured to irradiate radiation to an object; and/or a processing module configured to automatically set a part of a region to which the radiation irradiating module is able to irradiate the radiation, to a region of interest, and further configured to determine at least one of a radiation irradiation position and a radiation irradiation zone of the radiation irradiating module based on the region of interest… Figs. 17, 18, Abstract.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEZAWIT N SHIMELES whose telephone number is (571)272-7663. The examiner can normally be reached M-F 7:30am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BEZAWIT NOLAWI SHIMELES/Examiner, Art Unit 2673
/CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673