DETAILED ACTION
Status of the Claims
The filing dated 9/13/24 is entered. Claims 1-20 are pending.
Information Disclosure Statements
The information disclosure statement (IDS) submitted on 12/17/24 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4-7, 9-11, 13-15, 17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Li, US-20180106733, in view of Ivakhnenko, US-20100266204.
In regards to claim 1, Li discloses a system for three-dimensional object image projection and image augmentation (Abstract, generating a three-dimensional combined image; three-dimensional test image of a test item is combined with a three-dimensional article image of an article that is undergoing a radiation examination to generate the three-dimensional combined image), the system comprising: one or more computer processors (Par. 0104, 0107 processors); one or more graphics processing units (Par. 0104, 0107 graphics processing processors); one or more computer readable storage media (Par. 0104, 0107 storage media); and program instructions stored on the one or more computer readable storage media for execution by at least one of the one or more computer processors or at least one of the one or more graphics processing units, the stored program instructions including instructions (Par. 0104, 0107 processing instructions) to: retrieve an object image (Par. 0031, 0044, 0045 data structure may comprise a plurality (e.g. 10s, 100s, 1000s, etc.) of test item images, each representative of a different test item, and the 3D test image that is utilized may be selected at random); retrieve a background image (Par. 0004 acquiring a three-dimensional article image of the article via the radiation examination and acquiring a three-dimensional test image of the test item); determine one or more voids in the background image suitable for inserting the object image (Par. 0004, 0053 identifying, within the three-dimensional article image, a first group of voxels representative of object regions corresponding to objects within the article and a second group of voxels representative of void regions corresponding to voids within the article); insert the object image into the background image to create a projected image (Par. 0004, 0044, 0053 when the degree of overlap is less than a specified degree, merging the three-dimensional test image with the three-dimensional article image to generate the three-dimensional combined image, where the three-dimensional combined image is representative of the test item being within the article at the first selection region during the radiation examination; the image insertion component 126 may be configured to insert a 3D test image of a weapon, explosive, or other threat item into a 3D article image of a benign bag to create a 3D combined image that appears to show a threat item within the bag; the image metric can correspond to CT value, which is based upon density information, z-effective information, or other information derivable from projection data generated from the radiation examination); and perform the image augmentation on the projected image to produce a realistic synthetic image (Par. 0077, 0086, 0087 overlapping voxels of the 3D test image that overlap the first group of voxels can be weighted. In such an example, these overlapping voxels (e.g., of the 3D test image) can be weighted with the portion(s) of the first group of voxels (e.g., overlapped voxels) rather than replacing the portion of the first group of voxels. For example, one or more properties of these overlapping voxels of the 3D test image can be combined with one or more corresponding properties of the portion(s) of the first group of voxels that is overlapped; the threshold comprises three or more abutment locations 1400. Determining the number of abutment locations 1400 provides for a 3D combined image that is more realistic).
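For clarity of the record, the insertion scheme Li describes (voxels below a CT threshold treated as void, a merge performed only when the test item's overlap with object voxels is small, and the few overlapping voxels weighted rather than overwritten) may be illustrated by the following sketch. This is an illustrative reconstruction, not code from the reference; the array layout, the CT void threshold, the overlap limit, and the equal-weight blend are all assumed values.

```python
import numpy as np

def insert_test_image(article, test, offset, ct_void_threshold=-900.0,
                      max_overlap=0.05):
    """Merge a 3D test-item image into a 3D article image at `offset`.

    Voxels below `ct_void_threshold` are treated as void; the merge is
    rejected when too many test-item voxels overlap object (non-void)
    voxels of the article. All parameter values are illustrative.
    """
    z, y, x = offset
    dz, dy, dx = test.shape
    region = article[z:z+dz, y:y+dy, x:x+dx]

    test_mask = test > ct_void_threshold      # voxels occupied by the test item
    object_mask = region > ct_void_threshold  # non-void voxels of the article

    overlap = np.logical_and(test_mask, object_mask).mean()
    if overlap >= max_overlap:
        return None  # placement rejected; another selection region is tried

    combined = article.copy()
    target = combined[z:z+dz, y:y+dy, x:x+dx]
    # Fill void voxels with the test item; weight (average) the few
    # overlapping object voxels instead of replacing them.
    target[test_mask & ~object_mask] = test[test_mask & ~object_mask]
    both = test_mask & object_mask
    target[both] = 0.5 * (target[both] + test[both])
    return combined
```

The rejection branch mirrors Li's "degree of overlap is less than a specified degree" condition; the weighting branch mirrors the Par. 0077 voxel-weighting discussion.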
Li does not expressly disclose manipulating the object image to fit into the background image.
Ivakhnenko discloses manipulating the object image to fit into the background image (Par. 0015, 0016 the TIP System can scale the size of the image of the threat as a function of the determined location of the threat in the image of the scanned object; the TIP system can apply the scaling factor to reduce or enlarge the size of the TIP image and insert the TIP image into the image of the scanned object); inserting the object image into the background image to create a projected image (Par. 0010, 0056 the scaling factor can be used to scale the TIP image data prior to combining it with the object image data, at 824, to produce the final image of the TIP inserted in the image of the object; project the image of the threat into the image of the object at a predefined or random location, randomly or at a predetermined time or number of objects scanned); performing the image augmentation on the projected image to produce a realistic synthetic image (Par. 0010, 0045 provide the TIP transformation in real time, so as not to delay presentation of the scanner image and possibly indicate the presence of a TIP; the same image transformation method can be used for side view, as shown in FIG. 4. For the particular case of a scanning system, the Y-coordinate geo-corrected plane coincides with the tunnel T wall).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image generation of Li can include the image manipulation of Ivakhnenko. The motivation for doing so would have been to provide realistic threat images.
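Ivakhnenko's location-dependent scaling step can be sketched as follows. The nearest-neighbour resampling and the max-compositing rule are assumptions for illustration; the reference requires only that a scaling factor derived from the insertion location be applied to the TIP image before combining it with the object image.

```python
import numpy as np

def scale_tip_image(tip, factor):
    """Nearest-neighbour rescale of a 2-D TIP image by `factor`
    (a stand-in for the reference's scaling step)."""
    h, w = tip.shape
    new_h = max(1, round(h * factor))
    new_w = max(1, round(w * factor))
    rows = np.minimum((np.arange(new_h) / factor).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) / factor).astype(int), w - 1)
    return tip[np.ix_(rows, cols)]

def insert_tip(scan, tip, top_left, factor):
    """Scale the TIP image for its insertion location, then composite it
    into the scan (pixel-wise max, so dense threat material dominates
    the benign background)."""
    scaled = scale_tip_image(tip, factor)
    r, c = top_left
    h, w = scaled.shape
    out = scan.copy()
    out[r:r+h, c:c+w] = np.maximum(out[r:r+h, c:c+w], scaled)
    return out
```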
In regards to claim 10, Li discloses a method for three-dimensional object image projection and image augmentation (Abstract, generating a three-dimensional combined image; three-dimensional test image of a test item is combined with a three-dimensional article image of an article that is undergoing a radiation examination to generate the three-dimensional combined image), the method comprising: retrieving an object image (Par. 0031, 0044, 0045 data structure may comprise a plurality (e.g. 10s, 100s, 1000s, etc.) of test item images, each representative of a different test item, and the 3D test image that is utilized may be selected at random); retrieving a background image (Par. 0004 acquiring a three-dimensional article image of the article via the radiation examination and acquiring a three-dimensional test image of the test item); determining one or more voids in the background image suitable for inserting the object image (Par. 0004, 0053 identifying, within the three-dimensional article image, a first group of voxels representative of object regions corresponding to objects within the article and a second group of voxels representative of void regions corresponding to voids within the article); producing a realistic image (Par. 0004, 0044, 0053 when the degree of overlap is less than a specified degree, merging the three-dimensional test image with the three-dimensional article image to generate the three-dimensional combined image, where the three-dimensional combined image is representative of the test item being within the article at the first selection region during the radiation examination; the image insertion component 126 may be configured to insert a 3D test image of a weapon, explosive, or other threat item into a 3D article image of a benign bag to create a 3D combined image that appears to show a threat item within the bag; the image metric can correspond to CT value, which is based upon density information, z-effective information, or other information derivable from projection data generated from the radiation examination); and inserting the object image into the background image to create a projected image (Par. 0004, 0044, 0053 when the degree of overlap is less than a specified degree, merging the three-dimensional test image with the three-dimensional article image to generate the three-dimensional combined image, where the three-dimensional combined image is representative of the test item being within the article at the first selection region during the radiation examination; the image insertion component 126 may be configured to insert a 3D test image of a weapon, explosive, or other threat item into a 3D article image of a benign bag to create a 3D combined image that appears to show a threat item within the bag; the image metric can correspond to CT value, which is based upon density information, z-effective information, or other information derivable from projection data generated from the radiation examination).
Li does not expressly disclose manipulating the object image to fit into the background image.
Ivakhnenko discloses manipulating the object image to fit into the background image (Par. 0015, 0016 the TIP System can scale the size of the image of the threat as a function of the determined location of the threat in the image of the scanned object; the TIP system can apply the scaling factor to reduce or enlarge the size of the TIP image and insert the TIP image into the image of the scanned object); inserting the object image into the background image to create a projected image (Par. 0010, 0056 the scaling factor can be used to scale the TIP image data prior to combining it with the object image data, at 824, to produce the final image of the TIP inserted in the image of the object; project the image of the threat into the image of the object at a predefined or random location, randomly or at a predetermined time or number of objects scanned); performing the image augmentation on the projected image to produce a realistic synthetic image (Par. 0010, 0045 provide the TIP transformation in real time, so as not to delay presentation of the scanner image and possibly indicate the presence of a TIP; the same image transformation method can be used for side view, as shown in FIG. 4. For the particular case of a scanning system, the Y-coordinate geo-corrected plane coincides with the tunnel T wall).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image generation of Li can include the image manipulation of Ivakhnenko. The motivation for doing so would have been to provide realistic threat images.
In regards to claim 17, Li discloses a system for three-dimensional object image projection and image augmentation (Abstract, generating a three-dimensional combined image; three-dimensional test image of a test item is combined with a three-dimensional article image of an article that is undergoing a radiation examination to generate the three-dimensional combined image), the system comprising: one or more graphics processing units (Par. 0104, 0107 graphics processing processors), the one or more graphics processing units configured to: retrieve an object image (Par. 0031, 0044, 0045 data structure may comprise a plurality (e.g. 10s, 100s, 1000s, etc.) of test item images, each representative of a different test item, and the 3D test image that is utilized may be selected at random); retrieve a background image (Par. 0004 acquiring a three-dimensional article image of the article via the radiation examination and acquiring a three-dimensional test image of the test item); determine one or more voids in the background image suitable for inserting the object image (Par. 0004, 0053 identifying, within the three-dimensional article image, a first group of voxels representative of object regions corresponding to objects within the article and a second group of voxels representative of void regions corresponding to voids within the article); insert the object image into the background image to create a projected image (Par. 0004, 0044, 0053 when the degree of overlap is less than a specified degree, merging the three-dimensional test image with the three-dimensional article image to generate the three-dimensional combined image, where the three-dimensional combined image is representative of the test item being within the article at the first selection region during the radiation examination; the image insertion component 126 may be configured to insert a 3D test image of a weapon, explosive, or other threat item into a 3D article image of a benign bag to create a 3D combined image that appears to show a threat item within the bag; the image metric can correspond to CT value, which is based upon density information, z-effective information, or other information derivable from projection data generated from the radiation examination); and perform the image augmentation on the projected image to produce a realistic image (Par. 0077, 0086, 0087 overlapping voxels of the 3D test image that overlap the first group of voxels can be weighted. In such an example, these overlapping voxels (e.g., of the 3D test image) can be weighted with the portion(s) of the first group of voxels (e.g., overlapped voxels) rather than replacing the portion of the first group of voxels. For example, one or more properties of these overlapping voxels of the 3D test image can be combined with one or more corresponding properties of the portion(s) of the first group of voxels that is overlapped; the threshold comprises three or more abutment locations 1400. Determining the number of abutment locations 1400 provides for a 3D combined image that is more realistic).
Li does not expressly disclose manipulating the object image to fit into the background image.
Ivakhnenko discloses manipulating the object image to fit into the background image (Par. 0015, 0016 the TIP System can scale the size of the image of the threat as a function of the determined location of the threat in the image of the scanned object; the TIP system can apply the scaling factor to reduce or enlarge the size of the TIP image and insert the TIP image into the image of the scanned object); inserting the object image into the background image to create a projected image (Par. 0010, 0056 the scaling factor can be used to scale the TIP image data prior to combining it with the object image data, at 824, to produce the final image of the TIP inserted in the image of the object; project the image of the threat into the image of the object at a predefined or random location, randomly or at a predetermined time or number of objects scanned); performing the image augmentation on the projected image to produce a realistic synthetic image (Par. 0010, 0045 provide the TIP transformation in real time, so as not to delay presentation of the scanner image and possibly indicate the presence of a TIP; the same image transformation method can be used for side view, as shown in FIG. 4. For the particular case of a scanning system, the Y-coordinate geo-corrected plane coincides with the tunnel T wall).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image generation of Li can include the image manipulation of Ivakhnenko. The motivation for doing so would have been to provide realistic threat images.
In regards to claim 2, Li and Ivakhnenko, as combined above, disclose the object image is retrieved from a database (Li Par. 0031, 0044, 0045 data structure, i.e. database).
Li and Ivakhnenko do not expressly disclose that the background image is retrieved from a database.
However, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the object and background image can be retrieved from a database. The motivation for doing so would have been to provide non real-time processing of image data.
In regards to claim 11, Li and Ivakhnenko, as combined above, disclose the object image is retrieved from a database (Li Par. 0031, 0044, 0045 data structure, i.e. database).
Li and Ivakhnenko do not expressly disclose that the background image is retrieved from a database.
However, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the object and background image can be retrieved from a database. The motivation for doing so would have been to provide non real-time processing of image data.
In regards to claim 4, Li and Ivakhnenko, as combined above, disclose determine the one or more voids in the background image suitable for inserting the object image further comprises: threshold the background image using a computed tomography (CT) value above a predetermined level (Li Par. 0047, 0052, 0053 the image insertion component 126 is configured to identify one or more groups of voxels having an image metric (e.g. a CT value) that is below a specified threshold, and to define at least one of the one or more groups of voxels as the selection region; the specified threshold may be selected based upon what types of objects are to be effectively considered a void and what types of objects are to be considered objects that the threat item cannot overlap without making the combined image appear unrealistic (e.g. a gun occupying a same space as a heel of a shoe may appear unrealistic, thus making the gun more easily detectable)) to remove air (Ivakhnenko Par. 0033 the system can extend the boundary of the threat (using air) to the shape of a rectangular box that encompasses the threat object).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image generation of Li can include the image manipulation of Ivakhnenko. The motivation for doing so would have been to provide realistic threat images.
In regards to claim 13, Li and Ivakhnenko, as combined above, disclose determining the one or more voids in the background image suitable for inserting the object image further comprises: threshold the background image using a computed tomography (CT) value above a predetermined level (Li Par. 0047, 0052, 0053 the image insertion component 126 is configured to identify one or more groups of voxels having an image metric (e.g. a CT value) that is below a specified threshold, and to define at least one of the one or more groups of voxels as the selection region; the specified threshold may be selected based upon what types of objects are to be effectively considered a void and what types of objects are to be considered objects that the threat item cannot overlap without making the combined image appear unrealistic (e.g. a gun occupying a same space as a heel of a shoe may appear unrealistic, thus making the gun more easily detectable)) to remove air (Ivakhnenko Par. 0033 the system can extend the boundary of the threat (using air) to the shape of a rectangular box that encompasses the threat object).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image generation of Li can include the image manipulation of Ivakhnenko. The motivation for doing so would have been to provide realistic threat images.
In regards to claim 19, Li and Ivakhnenko, as combined above, disclose determine the one or more voids in the background image suitable for inserting the object image further comprises: threshold the background image using a computed tomography (CT) value above a predetermined level (Li Par. 0047, 0052, 0053 the image insertion component 126 is configured to identify one or more groups of voxels having an image metric (e.g. a CT value) that is below a specified threshold, and to define at least one of the one or more groups of voxels as the selection region; the specified threshold may be selected based upon what types of objects are to be effectively considered a void and what types of objects are to be considered objects that the threat item cannot overlap without making the combined image appear unrealistic (e.g. a gun occupying a same space as a heel of a shoe may appear unrealistic, thus making the gun more easily detectable)) to remove air (Ivakhnenko Par. 0033 the system can extend the boundary of the threat (using air) to the shape of a rectangular box that encompasses the threat object).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image generation of Li can include the image manipulation of Ivakhnenko. The motivation for doing so would have been to provide realistic threat images.
In regards to claim 5, Ivakhnenko further discloses wherein manipulate the object image to fit into the background image further comprises: manipulate the object image using any standard operator facing controls, wherein the standard operator facing controls include at least one of a rotation, a zoom in, or a zoom out (Par. 0015 the TIP System can scale the size of the image of the threat as a function of the determined location of the threat in the image of the scanned object).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image generation of Li can include the image manipulation of Ivakhnenko. The motivation for doing so would have been to provide realistic threat images.
In regards to claim 14, Ivakhnenko further discloses wherein manipulate the object image to fit into the background image further comprises: manipulate the object image using any standard operator facing controls, wherein the standard operator facing controls include at least one of a rotation, a zoom in, or a zoom out (Par. 0015 the TIP System can scale the size of the image of the threat as a function of the determined location of the threat in the image of the scanned object).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image generation of Li can include the image manipulation of Ivakhnenko. The motivation for doing so would have been to provide realistic threat images.
In regards to claim 6, Ivakhnenko further discloses the image augmentation includes at least one of cropping objects, masking materials, geometric transforms of an object shape, and material replacement (Par. 0033, 0035 the process for transforming the TIP image of an object taken at point O1 to point O3. The first transformation is the parallel translation of the rectangular box located at the point O1 in the direction of vector Ob O1 to point O2 where the distance from Ob to O2 is the same as the distance from Ob to O3 (or where the distance |ObO2| is the same as the distance |ObO3|); the system can transform the image taken of the threat at point O1 to the arbitrary chosen point O3 in the tunnel and to obtain the X-coordinate geo-corrected image of the threat at this point. The exact threat geometry can be rather complex and the requirement to know the geometry for image transformation can be burdensome. To overcome this problem, in accordance with one embodiment of the invention, the system can extend the boundary of the threat (using air) to the shape of a rectangular box that encompasses the threat object. From the physical point of view, the system does not change, but from the mathematical point of view, the problem becomes simplified. The extended threat has a rectangular shape and the problem is reduced to obtaining the relation between the geo-corrected images for rectangular boxes).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image generation of Li can include the image manipulation of Ivakhnenko. The motivation for doing so would have been to provide realistic threat images.
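The rectangular-box simplification quoted above (extending the threat boundary "using air" so that the geo-correction only has to handle rectangular boxes) can be sketched in a few lines. The 2-D case and the choice of air value are illustrative assumptions, not details from the reference.

```python
import numpy as np

def extend_to_box(threat, air_value=0.0):
    """Return the tight rectangular box around the non-air region of a
    2-D threat image. Pixels inside the box that do not belong to the
    threat stay at `air_value`, so physically nothing changes, but the
    result has the simple rectangular footprint the geometric
    transformation needs."""
    mask = threat != air_value
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return threat[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```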
In regards to claim 15, Ivakhnenko further discloses the image augmentation includes at least one of cropping objects, masking materials, geometric transforms of an object shape, and material replacement (Par. 0033, 0035 the process for transforming the TIP image of an object taken at point O1 to point O3. The first transformation is the parallel translation of the rectangular box located at the point O1 in the direction of vector Ob O1 to point O2 where the distance from Ob to O2 is the same as the distance from Ob to O3 (or where the distance |ObO2| is the same as the distance |ObO3|); the system can transform the image taken of the threat at point O1 to the arbitrary chosen point O3 in the tunnel and to obtain the X-coordinate geo-corrected image of the threat at this point. The exact threat geometry can be rather complex and the requirement to know the geometry for image transformation can be burdensome. To overcome this problem, in accordance with one embodiment of the invention, the system can extend the boundary of the threat (using air) to the shape of a rectangular box that encompasses the threat object. From the physical point of view, the system does not change, but from the mathematical point of view, the problem becomes simplified. The extended threat has a rectangular shape and the problem is reduced to obtaining the relation between the geo-corrected images for rectangular boxes).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image generation of Li can include the image manipulation of Ivakhnenko. The motivation for doing so would have been to provide realistic threat images.
In regards to claim 20, Ivakhnenko further discloses the image augmentation includes at least one of cropping objects, masking materials, geometric transforms of an object shape, and material replacement (Par. 0033, 0035 the process for transforming the TIP image of an object taken at point O1 to point O3. The first transformation is the parallel translation of the rectangular box located at the point O1 in the direction of vector Ob O1 to point O2 where the distance from Ob to O2 is the same as the distance from Ob to O3 (or where the distance |ObO2| is the same as the distance |ObO3|); the system can transform the image taken of the threat at point O1 to the arbitrary chosen point O3 in the tunnel and to obtain the X-coordinate geo-corrected image of the threat at this point. The exact threat geometry can be rather complex and the requirement to know the geometry for image transformation can be burdensome. To overcome this problem, in accordance with one embodiment of the invention, the system can extend the boundary of the threat (using air) to the shape of a rectangular box that encompasses the threat object. From the physical point of view, the system does not change, but from the mathematical point of view, the problem becomes simplified. The extended threat has a rectangular shape and the problem is reduced to obtaining the relation between the geo-corrected images for rectangular boxes).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image generation of Li can include the image manipulation of Ivakhnenko. The motivation for doing so would have been to provide realistic threat images.
In regards to claim 7, Li and Ivakhnenko, as combined above, disclose the object image projection and the image augmentation is performed by the one or more graphics processing units (Li Par. 0104, 0107 graphics processing processors).
In regards to claim 9, Li and Ivakhnenko, as combined above, disclose sending the synthetic image to a user (Li Par. 0048 received information/images may be provided by the terminal 130 for display on a monitor 132 to a user 134 (e.g., security personnel, medical personnel, etc.). In this way, the user 134 can inspect the image(s) to identify areas of interest within the article 104 and/or the user 134 can be tested by displaying a combined image).
Claims 3, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Li, US-20180106733, and Ivakhnenko, US-20100266204, as combined above in regards to claims 1, 10, and 17, in further view of Agrawal, US-10096122.
In regards to claim 3, Li and Ivakhnenko do not disclose expressly retrieve the object image further comprises: select the object image from a source image; isolate the object image from other objects in source image; and extract the object image from the source image.
Agrawal discloses retrieve the object image further comprises: select the object image from a source image (Col. 2, lines 21-45; Fig. 7-8); isolate the object image from other objects in source image (Col. 2, lines 21-45; Fig. 7-8; it is desired to capture an image of a person holding an article of clothing 115 and then segment the image data to produce a cropped image 103 of the article of clothing 115 with the other objects around the clothing 115 (the person holding the clothing, the clothing hanger, and/or any features in the background of the image) removed); and extract the object image from the source image (Col. 2, lines 21-45; Fig. 7-8; it is desired to capture an image of a person holding an article of clothing 115 and then segment the image data to produce a cropped image 103 of the article of clothing 115 with the other objects around the clothing 115 (the person holding the clothing, the clothing hanger, and/or any features in the background of the image) removed).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image processing of Li and Ivakhnenko can include further processing such as Agrawal discloses. The motivation for doing so would have been to automatically and quickly produce threat images.
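Agrawal's select/isolate/extract sequence can be sketched as a mask-and-crop operation. The object mask here is a hypothetical input standing in for the segmentation Agrawal's pipeline produces; the background value is an assumption.

```python
import numpy as np

def extract_object(source, object_mask, background=0.0):
    """Isolate one object from a source image: blank every pixel outside
    `object_mask` (removing other objects and the background), then crop
    to the object's bounding box to extract it."""
    isolated = np.where(object_mask, source, background)
    rows = np.where(object_mask.any(axis=1))[0]
    cols = np.where(object_mask.any(axis=0))[0]
    return isolated[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```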
In regards to claim 12, Li and Ivakhnenko do not disclose expressly retrieving the object image further comprises: selecting the object image from a source image; isolating the object image from other objects in source image; and extracting the object image from the source image.
Agrawal discloses retrieving the object image further comprises: selecting the object image from a source image (Col. 2, lines 21-45; Fig. 7-8); isolating the object image from other objects in source image (Col. 2, lines 21-45; Fig. 7-8; it is desired to capture an image of a person holding an article of clothing 115 and then segment the image data to produce a cropped image 103 of the article of clothing 115 with the other objects around the clothing 115 (the person holding the clothing, the clothing hanger, and/or any features in the background of the image) removed); and extracting the object image from the source image (Col. 2, lines 21-45; Fig. 7-8; it is desired to capture an image of a person holding an article of clothing 115 and then segment the image data to produce a cropped image 103 of the article of clothing 115 with the other objects around the clothing 115 (the person holding the clothing, the clothing hanger, and/or any features in the background of the image) removed).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image processing of Li and Ivakhnenko can include further processing such as Agrawal discloses. The motivation for doing so would have been to automatically and quickly produce threat images.
In regards to claim 18, Li and Ivakhnenko do not disclose expressly retrieve the object image further comprises: select the object image from a source image; isolate the object image from other objects in source image; and extract the object image from the source image.
Agrawal discloses retrieve the object image further comprises: select the object image from a source image (Col 2, 21-45; Fig. 7-8); isolate the object image from other objects in source image (Col 2, 21-45; Fig. 7-8; it is desired to capture an image of a person holding an article of clothing 115 and then segment the image data to produce a cropped image 103 of the article of clothing 115 with the other objects around the clothing 115 (the person holding the clothing, the clothing hanger, and/or any features in the background of the image) removed); and extract the object image from the source image (Col 2, 21-45; Fig. 7-8; it is desired to capture an image of a person holding an article of clothing 115 and then segment the image data to produce a cropped image 103 of the article of clothing 115 with the other objects around the clothing 115 (the person holding the clothing, the clothing hanger, and/or any features in the background of the image) removed).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image processing of Li and Ivakhnenko can include further processing such as Agrawal discloses. The motivation for doing so would have been to automatically and quickly produce threat images.
Claim(s) 8 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li, US-20180106733, and Ivakhnenko, US-20100266204, as combined above in regards to claims 1 and 10 in further view of Bennett, US-20180096497.
In regards to claim 8, Li and Ivakhnenko do not disclose expressly statistically validate the synthetic image against an X-ray system native image.
Bennett discloses statistically validate the synthetic image against an X-ray system native image (Par. 0019, 0021 the mobile computing device may implement a local image validation module. Illustratively, the image validation module may determine whether an image will meet the validation criteria of the networked computing service prior to uploading; enhancement criteria applied by the image enhancement module, or any combination thereof. As an example, the image enhancement module may detect that the original digital image).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image processing of Li and Ivakhnenko can include further processing such as Bennett discloses. The motivation for doing so would have been to improve efficiency and quickly produce threat images.
In regards to claim 16, Li and Ivakhnenko do not disclose expressly validating the realistic image statistically against an X-ray system native image.
Bennett discloses validating the realistic image statistically against an X-ray system native image (Par. 0019, 0021 the mobile computing device may implement a local image validation module. Illustratively, the image validation module may determine whether an image will meet the validation criteria of the networked computing service prior to uploading; enhancement criteria applied by the image enhancement module, or any combination thereof. As an example, the image enhancement module may detect that the original digital image).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the image processing of Li and Ivakhnenko can include further processing such as Bennett discloses. The motivation for doing so would have been to improve efficiency and quickly produce threat images.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CORY A ALMEIDA whose telephone number is (571)270-3143. The examiner can normally be reached M-Th 9AM-7:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nitin (Kumar) Patel can be reached at (571) 272-7677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CORY A ALMEIDA/ Primary Examiner, Art Unit 2628 2/27/26