Prosecution Insights
Last updated: April 19, 2026
Application No. 18/056,275

METHOD AND APPARATUS FOR IDENTIFYING ITEMS UNDER A PERSON'S CLOTHING

Status: Non-Final OA (§103)
Filed: Nov 17, 2022
Examiner: ZAK, JACQUELINE ROSE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Motorola Solutions Inc.
OA Round: 3 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 55%

Examiner Intelligence

Career Allow Rate: 67% (above average; 8 granted / 12 resolved; +4.7% vs TC avg)
Interview Lift: -11.4% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 10m (typical timeline)
Currently Pending: 46
Total Applications: 58 (career history, across all art units)
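These figures reduce to simple ratios over the examiner's resolved cases. Below is a minimal sketch of how a dashboard like this could derive them; the per-case record fields are hypothetical and the interview split is invented, so only the allow rate is calibrated to the numbers above.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # allowed, as opposed to abandoned
    had_interview: bool  # at least one examiner interview on record

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Career allow rate: grants divided by resolved cases."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow-rate delta between cases with and without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# 12 resolved cases, 8 granted -> the 67% career allow rate shown above.
# The with/without-interview split here is made up, so the printed lift
# is illustrative only (the dashboard reports -11.4%).
cases = ([ResolvedCase(True, False)] * 6 + [ResolvedCase(True, True)] * 2 +
         [ResolvedCase(False, False)] * 2 + [ResolvedCase(False, True)] * 2)

print(f"career allow rate: {allow_rate(cases):.0%}")      # 67%
print(f"interview lift:    {interview_lift(cases):+.1%}")
```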

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 12 resolved cases
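Since each row reports both the examiner's rate and its delta against the Tech Center average, the baseline falls out as rate minus delta. A quick consistency check of the chart data (values copied from above; the exact metric definition is the dashboard's, not specified here):

```python
# statute -> (examiner rate %, delta vs TC avg %), as charted above
stats = {
    "§101": (5.7, -34.3),
    "§103": (56.3, +16.3),
    "§102": (21.1, -18.9),
    "§112": (13.8, -26.2),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average estimate
    print(f"{statute}: examiner {rate:4.1f}%, implied TC avg {tc_avg:.1f}%")
# Every row implies the same ~40.0% baseline, so the deltas are internally
# consistent with a single Tech Center average estimate.
```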

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/2025 has been entered.

Claim Status

Claims 1-13 are pending for examination in the application filed 12/17/2025. Claims 1, 3, 8, 11, and 13 have been amended.

Response to Arguments and Amendments

Applicant's arguments filed 12/17/2025 have been fully considered but they are not persuasive. Applicant argues on pages 5-6 that Sun does not teach the limitation “a scenario modeler configured to receive the image and the metadata and output multiple generated images based on the metadata, wherein the multiple generated images each comprise a different scenario of an object under clothing in a physical environment simulated by a 3D rendering engine of the scenario modeler according to the metadata for comparison with the image of the anomaly”. As stated on page 6 of the Final Rejection filed 09/19/2025:

[Image: Sun, Figure 1, as reproduced in the Final Rejection]

As shown in Figure 1, the original received image is (a) and the multiple generated images each rendered comprising a different scenario of an object under clothing (differing parameters) are (b), (c), and (d). As described on page 448 of Sun, “Figure 1 shows an example in which a toy gun, a knife and a grenade are hidden under a textured cloth. From the acquired luminance image (figure 1(a)) which represents that perceived by the human eye, or conventional surveillance camera, it is not easy to identify the objects hidden under the cloth. However, they become significantly more obvious in the synthetic rendered images as shown in figure 1(b) and 1(c), which correspond to different illumination setting. So in addition to monitoring conventional images acquired from a camera(s) directly, the rendered images provide an additional very different modality of data which can reveal details difficult to find using other normal approaches”. Furthermore, Sun also explains on page 448: “The artificially shaded images rendered in such way are free from the confusion of textured reflectance (i.e. camouflage) due to the constant E representing the composite albedo. As such, the rendered shaded images are composed from the object’s shape information only. The perception of the object’s shape is largely enhanced when compared to the same scene camouflaged by some complicated covering”. Please see below for the updated 35 USC § 103 rejections as facilitated by the newly added amendments.

Claim Objections

Claim 13 is objected to because the amendment does not comply with the requirements of 37 CFR 1.121(c): it is marked (Currently Amended) but does not include any claim text with markings.
Amendments to the claims filed on or after July 30, 2003 must comply with 37 CFR 1.121(c)(2), which states:

All claims being currently amended in an amendment paper shall be presented in the claim listing, indicate a status of “currently amended,” and be submitted with markings to indicate the changes that have been made relative to the immediate prior version of the claims. The text of any added subject matter must be shown by underlining the added text. The text of any deleted matter must be shown by strike-through except that double brackets placed before and after the deleted characters may be used to show deletion of five or fewer consecutive characters. The text of any deleted subject matter must be shown by being placed within double brackets if strike-through cannot be easily perceived. Only claims having the status of “currently amended,” or “withdrawn” if also being amended, shall include markings. If a withdrawn claim is currently amended, its status in the claim listing may be identified as “withdrawn—currently amended.”

Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over Itsuji (JP2021128145A) in view of DeAngelus (US20220334243A1) and Sun (Sun, Jiuai, et al. "Concealed object perception and recognition using a photometric stereo strategy."
International Conference on Advanced Concepts for Intelligent Vision Systems. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009).

Regarding claim 1, Itsuji teaches an apparatus comprising: a camera configured to provide an image of an anomaly, wherein the anomaly comprises an object under fabric ([0038] As shown in FIG. 1, the terahertz wave camera system may have a visible camera 109, and the image processing unit 105 determines the shape of the subject 106 by comparing the terahertz image with the visible image from the visible camera 109. [0014] In the present embodiment, it is assumed that the subject 106 is a person, and a concealed object 106a such as a gun or a knife is hidden under the covering 106b which is clothing); a video analytics system (image processing unit 105) configured to receive the image of the anomaly and output at least an angle to the anomaly ([0038] As shown in FIG. 1, the terahertz wave camera system may have a visible camera 109, and the image processing unit 105 determines the shape of the subject 106 by comparing the terahertz image with the visible image from the visible camera 109. The visible image 944 includes shape information. [0123] Depending on the surface shape of the subject, the change in the angle of the surface is complicated and differs for each subject. Therefore, the preferable positional relationship between the illumination unit, which is a light source, and the detection unit may change for each subject. Further, when the subject moves, the positional relationship between the illumination unit, which is a light source, and the detection unit may change from moment to moment); a comparison engine (determination unit 945) configured to receive the image and the multiple generated images and determine a best generated image from the multiple generated images ([0038] As shown in FIG. 1, the terahertz wave camera system may have a visible camera 109, and the image processing unit 105 determines the shape of the subject 106 by comparing the terahertz image with the visible image from the visible camera 109. [0048] If there is no concealed object 106a whose terahertz image matches the shape data 1210 by the determination in step S1203, the process returns to step S1201. Then, another condition, for example, condition 1232 is selected as the shooting condition, and the image data is acquired again. If the terahertz image mainly contains the information of the concealed object 106a that matches the shape data 1210 by the determination in step S1203, the shooting is terminated. [0118] Coverage image 1401 includes a plurality of terahertz images of different coverings. Further, the covering image 1401 includes a plurality of terahertz images obtained by photographing the covering under the operation of different illumination units 101. The concealed object image 1402 includes a plurality of terahertz images obtained by photographing different concealed objects. In addition, the covering image 1401 includes a plurality of terahertz images obtained by photographing a concealed object under the operation of different illumination units 101).

Itsuji does not teach output at least an angle from the camera to the anomaly; an interface configured to receive the angle and create metadata comprising the angle, and output the metadata along with the image.
DeAngelus, in the same field of endeavor of detecting concealed objects, teaches output at least an angle from the camera to the anomaly ([0173] By pairing a collocated RF imaging sensor with a RGB/RGB-D camera, or sensor, it is possible to extract information about people and objects in the scene using computer vision techniques. [0120] a test subject, with or without an object on their person, moves throughout the field of view of the RF imaging sensor, and can stop at various positions within the field of view, and the RF imaging sensor can collect RF images in the snapshot mode of operation. Positions can be defined as a lateral offset from the center of the field of view (e.g., +1 foot, +2 feet), and range, or distance between the subject and/or object in the scene, to the RF imaging sensor (e.g., 4 feet, 6 feet, 8 feet). The angle of the individual with respect to the RF imaging sensor can also be varied (e.g., the dorsal side of the individual is facing the RF imaging sensor, the ventral side of the individual is facing the RF imaging sensor, or the individual is at an angle between facing the RF imaging sensor or facing away from the RF imaging sensor); an interface (multi-view imaging sensor system) configured to receive the angle and create metadata comprising the angle ([0055] FIG. 1B illustrates a process flow 100B illustrating fusing radio frequency (RF) signal, or volumetric data, with color or depth image data (e.g., red, green, and blue (RGB) data; or red, green, blue depth (RGB-D) data) by an embodiment of the imaging sensor system. [0056] FIG. 1B includes data blocks 151, 153, and 105…As an example, RGB/RGB-D data from data block 151 can represent data associated with a RGB or RGB-D images captured by one or more cameras of the imaging sensor system. [0057] RF volumetric data from data block 153 can represent imaging data corresponding to RF signals generated by an RF imaging sensors. For instance, the imaging data can include one or more three-dimensional (3-D) (RF-based) images, and/or information about the one or more 3-D images), and output the metadata along with the image ([0058] The processor modules process data corresponding to RGB/RGB-D data 151, RF volumetric data 153, component information data 155, or a combination thereof. [0058] coordinate alignment module 152 can include one or more computer executable instructions that are executed by one or more processors to orient one or more first objects in an image generated by RGB/RGB-D data from RGB/RGB-D cameras with one or more second objects in an image generated by the RF volumetric data from RF imaging sensors. Fig. 1B Coordinate Alignment 152 is output to Component Models).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the apparatus of Itsuji with the teachings of DeAngelus to determine angles of an anomaly to a camera because "a subject… can stop at various positions within the field of view" [DeAngelus 0120] and to output the image and metadata because "Aligning the RGB/RGB-D image with the RF volumetric image ensures that when a concealed object of interest is detected in an image generated by the RF imaging sensors, a user viewing a fused RGB/RGB-D data and the RF imaging sensor data can accurately determine where the concealed object of interest is located on the body of the person in the RGB/RGB-D image" [DeAngelus 0067].
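For concreteness, the claimed interface step (bundle the camera-to-anomaly geometry into metadata and pass it along with the image) could look like the following minimal sketch. The structure and field names are hypothetical, not taken from the application or from either reference.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AnomalyMetadata:
    angle_deg: float   # angle from the camera to the anomaly
    distance_m: float  # distance from the camera to the anomaly

def package(image: bytes, angle_deg: float, distance_m: float) -> tuple[bytes, str]:
    """Create metadata from the received geometry and output it with the image."""
    meta = AnomalyMetadata(angle_deg=angle_deg, distance_m=distance_m)
    return image, json.dumps(asdict(meta))

image, meta = package(b"<jpeg bytes>", angle_deg=32.5, distance_m=2.4)
print(meta)  # {"angle_deg": 32.5, "distance_m": 2.4}
```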
Itsuji does not teach a scenario modeler configured to receive the image and the metadata and output multiple generated images based on the metadata, wherein the multiple generated images each comprise a different scenario of an object under clothing in a physical environment simulated by a 3D rendering engine of the scenario modeler according to the metadata for comparison with the image of the anomaly.

Sun, in the same field of endeavor of hidden object detection and rendering, teaches a scenario modeler configured to receive the image and the metadata and output multiple generated images based on the metadata ([pg. 446 para. 5] Images from same systems but with the addition of structured illumination can recover surface normal information which can be used to produce albedo free rendered images, and detect subtle surface features caused by concealed threats, such as weapons, under the clothing or any other camouflaged cover), wherein the multiple generated images each comprise a different scenario of an object under clothing in a physical environment simulated by a 3D rendering engine of the scenario modeler according to the metadata for comparison with the image of the anomaly ([pg. 448] Figure 1 shows an example in which a toy gun, a knife and a grenade are hidden under a textured cloth. From the acquired luminance image (figure 1(a)) which represents that perceived by the human eye, or conventional surveillance camera, it is not easy to identify the objects hidden under the cloth. However, they become significantly more obvious in the synthetic rendered images as shown in figure 1(b) and 1(c), which correspond to different illumination setting. So in addition to monitoring conventional images acquired from a camera(s) directly, the rendered images provide an additional very different modality of data which can reveal details difficult to find using other normal approaches…The artificially shaded images rendered in such way are free from the confusion of textured reflectance (i.e. camouflage) due to the constant E representing the composite albedo. As such, the rendered shaded images are composed from the object’s shape information only. The perception of the object’s shape is largely enhanced when compared to the same scene camouflaged by some complicated covering).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the apparatus of Itsuji with the teachings of Sun to generate images comprising a different scenario of an object under clothing with rendering because "The concealed object can be metallic and plastic or ceramic in the form of handguns and knives or explosives, which may not be detectable with any one of the sensing modalities mentioned above" [Sun pg. 446 para. 5].
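Sun's albedo-free re-rendering amounts to Lambertian shading of a recovered normal map under synthetic lights: with a constant composite albedo E, intensity is E·max(0, n·l), so textured reflectance (camouflage) drops out and only shape survives. A minimal sketch under that reading; recovering `normals` by photometric stereo is assumed and not shown.

```python
import numpy as np

def render_scenarios(normals: np.ndarray, light_dirs, E: float = 1.0) -> list[np.ndarray]:
    """Shade a unit-normal map of shape (H, W, 3) under several synthetic
    illumination settings, one rendered image per light direction, as in
    Sun's figure 1(b)-(d). Constant E means the output depends on surface
    shape only, not on the textured reflectance of the covering."""
    images = []
    for l in light_dirs:
        l = np.asarray(l, dtype=float)
        l = l / np.linalg.norm(l)
        images.append(E * np.clip(normals @ l, 0.0, None))  # E * max(0, n . l)
    return images

# Varying the synthetic light direction yields the different "scenarios".
normals = np.dstack([np.zeros((64, 64)), np.zeros((64, 64)), np.ones((64, 64))])
scenarios = render_scenarios(normals, light_dirs=[(0, 0, 1), (1, 0, 1), (0, 1, 1)])
```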
Regarding claim 2, Itsuji, DeAngelus, and Sun teach the apparatus of claim 1. DeAngelus teaches a graphical user interface configured to display a type of object modeled in the best generated image ([0028] FIG. 13 illustrates a ground truth labeling tool graphical user interface of an imaging sensor system. [0084] The visual products, can include, but are not limited to a visual display of a segmented portion of a RGB/RGB-D image according to different portions of the body, and/or a visual display of a segmented portion of a detected object in a RF image, and/or the display of an anomaly detected in the RF image). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the apparatus of Itsuji with the teachings of DeAngelus to use a graphical user interface to display the image to "alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat" [DeAngelus 0235].

Regarding claim 3, Itsuji teaches a method ([0124] FIG. 15 is a flow diagram illustrating an operation flow of the present embodiment. The present embodiment is characterized by the method of operation of the illumination unit 101) comprising the steps of: capturing an image of an anomaly, wherein the anomaly comprises an object under fabric ([0038] As shown in FIG. 1, the terahertz wave camera system may have a visible camera 109, and the image processing unit 105 determines the shape of the subject 106 by comparing the terahertz image with the visible image from the visible camera 109. [0014] In the present embodiment, it is assumed that the subject 106 is a person, and a concealed object 106a such as a gun or a knife is hidden under the covering 106b which is clothing); determining angles and distances to the anomaly ([0038] The visible image 944 includes shape information. [0123] Depending on the surface shape of the subject, the change in the angle of the surface is complicated and differs for each subject. Therefore, the preferable positional relationship between the illumination unit, which is a light source, and the detection unit may change for each subject. Further, when the subject moves, the positional relationship between the illumination unit, which is a light source, and the detection unit may change from moment to moment); using the angles and distances to the anomaly (shape data) to generate images of the anomaly ([0101] the image processing unit 105 can also determine the shape of the subject 106 by comparing the terahertz image with the visible image from the visible camera 109. The visible image 944 can also be called shape data. [0048] If there is no concealed object 106a whose terahertz image matches the shape data 1210 by the determination in step S1203, the process returns to step S1201. Then, another condition, for example, condition 1232 is selected as the shooting condition, and the image data is acquired again), wherein the generated images of the anomaly are modeled with different scenarios of various items under various types of clothing ([0118] Coverage image 1401 includes a plurality of terahertz images of different coverings. Further, the covering image 1401 includes a plurality of terahertz images obtained by photographing the covering under the operation of different illumination units 101. The concealed object image 1402 includes a plurality of terahertz images obtained by photographing different concealed objects. In addition, the covering image 1401 includes a plurality of terahertz images obtained by photographing a concealed object under the operation of different illumination units 101); determining a best generated image from the generated images, wherein the best generated image is an image from the generated images that best fits the captured image of the anomaly ([0048] Then, another condition, for example, condition 1232 is selected as the shooting condition, and the image data is acquired again. If the terahertz image mainly contains the information of the concealed object 106a that matches the shape data 1210 by the determination in step S1203, the shooting is terminated).

Itsuji does not teach determining angles and distances from a camera to the anomaly; and displaying modeling parameters used for the best generated image.

DeAngelus, in the same field of endeavor of detecting concealed objects, teaches determining angles and distances from a camera to the anomaly ([0173] By pairing a collocated RF imaging sensor with a RGB/RGB-D camera, or sensor, it is possible to extract information about people and objects in the scene using computer vision techniques. [0120] a test subject, with or without an object on their person, moves throughout the field of view of the RF imaging sensor, and can stop at various positions within the field of view, and the RF imaging sensor can collect RF images in the snapshot mode of operation. Positions can be defined as a lateral offset from the center of the field of view (e.g., +1 foot, +2 feet), and range, or distance between the subject and/or object in the scene, to the RF imaging sensor (e.g., 4 feet, 6 feet, 8 feet). The angle of the individual with respect to the RF imaging sensor can also be varied (e.g., the dorsal side of the individual is facing the RF imaging sensor, the ventral side of the individual is facing the RF imaging sensor, or the individual is at an angle between facing the RF imaging sensor or facing away from the RF imaging sensor); and displaying modeling parameters used for the best generated image ([0053] Computer(s) 105 can ingest the RF image from RF imaging system 101, via connection 193, along with the one or more RGB images and the one or more RGB-D images from Camera Imaging System 103, via connection 192, and generate a composite image of the RF image and the one or more RGB images and the one or more RGB-D images. [0235] The automated alerts 2406 include a RF image 2422 and a bounding box 2432 around a detected object of interest. In this case the object is a thermos. One or more of the processors can characterize the detected object based on the size, shape, material, and/or any other parameter associated with objects, and compare the parameters to other objects with the same or similar parameters and alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat. [0085] A processor can also cause a visual alert to be displayed and/or an auditory alert to be sounded in response to a processor comparing the detected object to known dangerous or lethal objects, and determining that the object is a weapon that can be used in a dangerous or lethal manner).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Itsuji with the teachings of DeAngelus to determine angles and distances of an anomaly to a camera because "a subject… can stop at various positions within the field of view" [DeAngelus 0120] and to display parameters "in response to determining that a detected object belongs to a class of a dangerous or potentially lethal objects" [DeAngelus 0085].

Itsuji does not teach generate images of the anomaly in a simulated physical environment, wherein the generated images of the anomaly are modeled with a 3D rendering engine to simulate interaction with the physical environment.

Sun, in the same field of endeavor of hidden object detection and rendering, teaches generate images of the anomaly in a simulated physical environment, wherein the generated images of the anomaly are modeled with a 3D rendering engine to simulate interaction with the physical environment ([Conclusions pg. 453-454] A photometric stereo based strategy is proposed for the detection of concealed objects as an add-on function to current monitoring or surveillance systems. The new strategy has the capability to separate the surface reflectance or albedo to reveal true 3D shape information. The combined effect is to suppress surface colour and to enhance or make more apparent the 3D quality of the surface. The approach is low cost, needing only the addition of some of extra illumination. By rendering shaded images free from albedo effects, subtle geometrical details become visible as the resultant virtual images are related only to the surface normal information. The direction and intensity of the synthetic illumination used in the virtual image can be altered interactively allowing optimal results to be achieved).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Itsuji with the teachings of Sun to generate images of the anomaly with 3D rendering because "The concealed object can be metallic and plastic or ceramic in the form of handguns and knives or explosives, which may not be detectable with any one of the sensing modalities mentioned above" [Sun pg. 446 para. 5].

Regarding claim 4, Itsuji, DeAngelus, and Sun teach the method of claim 3. DeAngelus teaches wherein the modeling parameters comprises a particular item under a particular type of clothing ([0046] FIG. 24 illustrates different categories of output data products in the form of visual products or automated alert products, in accordance with exemplary embodiments of the present disclosure. [0239] In some embodiments, instead of one or more of the processors just detecting a concealed object of interest, one or more of the processors can segment portions of the scene of the RF image (scene segmentation 2501) by executing one or more computer executable instructions that cause the RF object detection architecture instruction set 2511 to segment the RF image into portions that include a person, or person of interest, luggage such as a backpack or roller bag, a random item in the scene such a box, or an item of clothing such as a coat. One or more of the processors can apply a segmentation mask around each of the segmented portions of the RF image. [0235] Automated alerts 2406 include characterization 2402 alerts and threat detection 2404 alerts. The automated alerts 2406 include a RF image 2422 and a bounding box 2432 around a detected object of interest. In this case the object is a thermos. One or more of the processors can characterize the detected object based on the size, shape, material, and/or any other parameter associated with objects, and compare the parameters to other objects with the same or similar parameters and alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat).
Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Itsuji with the teachings of DeAngelus to display parameters corresponding to a particular item under a particular type of clothing to "alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat" [DeAngelus 0235].

Regarding claim 5, Itsuji, DeAngelus, and Sun teach the method of claim 3. Itsuji further teaches wherein the anomaly comprises a weapon under clothing ([0014] In the present embodiment, it is assumed that the subject 106 is a person, and a concealed object 106a such as a gun or a knife is hidden under the covering 106b which is clothing).

Regarding claim 6, Itsuji, DeAngelus, and Sun teach the method of claim 3. DeAngelus teaches wherein the step of determining the best generated image comprises the step of performing a pixel-based comparison method, a mean-square error comparison, or a structural simulation similarity index measure ([0061] An object detected in the RGB/RGB-D image can be aligned with an objected detected in the RF image, based at least in part on one or more of the following parameters. A size of a RGB image and a RGB-D image (expressed in units of pixels), a field of view of the RGB image and the RGB-D image (expressed in units of pixels), a center of the RGB image and the RGB-D image (i.e., a reference coordinate in a scene of the RGB/RGB-D image denoted by (0,0) is located) (expressed in units of pixels)). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Itsuji with the teachings of DeAngelus to use a pixel-based comparison method because "Aligning the RGB/RGB-D image with the RF volumetric image ensures that when a concealed object of interest is detected in an image generated by the RF imaging sensors, a user viewing a fused RGB/RGB-D data and the RF imaging sensor data can accurately determine where the concealed object of interest is located on the body of the person in the RGB/RGB-D image" [DeAngelus 0067].

Regarding claim 7, Itsuji, DeAngelus, and Sun teach the method of claim 3. DeAngelus teaches displaying the image of the anomaly and the best generated image ([0046] FIG. 24 illustrates different categories of output data products in the form of visual products or automated alert products, in accordance with exemplary embodiments of the present disclosure. [0084] The visual products, can include, but are not limited to a visual display of a segmented portion of a RGB/RGB-D image according to different portions of the body, and/or a visual display of a segmented portion of a detected object in a RF image, and/or the display of an anomaly detected in the RF image). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Itsuji with the teachings of DeAngelus to display the image of the anomaly and generated image to "alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat" [DeAngelus 0235].
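Claim 6 names pixel-based comparison, mean-square error, and a structural similarity index as ways the comparison engine can pick the best generated image. A minimal sketch of that selection step, assuming float images scaled to [0, 1]; the combined score is an invented toy weighting, and the SSIM here comes from scikit-image rather than from anything in the references.

```python
import numpy as np
from skimage.metrics import structural_similarity  # pip install scikit-image

def best_generated_image(captured: np.ndarray, generated: list[np.ndarray]):
    """Return (index, image) of the generated scenario that best fits the
    captured image of the anomaly, scored by SSIM minus MSE."""
    def score(candidate: np.ndarray) -> float:
        mse = float(np.mean((captured - candidate) ** 2))  # lower is better
        ssim = structural_similarity(captured, candidate, data_range=1.0)
        return ssim - mse  # toy combination; a real engine would tune this
    best = max(range(len(generated)), key=lambda i: score(generated[i]))
    return best, generated[best]

# Example: pick whichever rendered scenario most resembles the capture.
captured = np.random.rand(64, 64)
candidates = [np.random.rand(64, 64) for _ in range(3)]
idx, best = best_generated_image(captured, candidates)
```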
Regarding claim 8, Itsuji teaches an interface comprising: logic circuitry ([0071] The transmitting unit 101a has a housing 200. Each of the plurality of transmitting elements 211a, 211b, 211c, and 211d is installed on the support substrate 201 of the housing 200. The transmitting unit 101a has a control circuit 202. The control circuit 202 is provided in the housing 200, and in the present embodiment, the control circuit 202 is provided on one surface constituting the outer shape of the housing 200. The control circuit 202 is electrically connected to the control unit 103, and controls the operations of the plurality of transmitting elements 211a, 211b, 211c, and 211d in response to the control signal from the control unit 103) configured to: receive an image of an anomaly from a camera ([0038] As shown in FIG. 1, the terahertz wave camera system may have a visible camera 109, and the image processing unit 105 determines the shape of the subject 106 by comparing the terahertz image with the visible image from the visible camera 109. [0014] In the present embodiment, it is assumed that the subject 106 is a person, and a concealed object 106a such as a gun or a knife is hidden under the covering 106b which is clothing); receive angles and distances between the anomaly ([0038] As shown in FIG. 1, the terahertz wave camera system may have a visible camera 109, and the image processing unit 105 determines the shape of the subject 106 by comparing the terahertz image with the visible image from the visible camera 109. The visible image 944 includes shape information. [0123] Depending on the surface shape of the subject, the change in the angle of the surface is complicated and differs for each subject. Therefore, the preferable positional relationship between the illumination unit, which is a light source, and the detection unit may change for each subject. Further, when the subject moves, the positional relationship between the illumination unit, which is a light source, and the detection unit may change from moment to moment); send the generated images to a comparison engine; send the image of the anomaly to the comparison engine ([0101] the image processing unit 105 can also determine the shape of the subject 106 by comparing the terahertz image with the visible image from the visible camera 109. The visible image 944 can also be called shape data. [0048] Then, another condition, for example, condition 1232 is selected as the shooting condition, and the image data is acquired again. If the terahertz image mainly contains the information of the concealed object 106a that matches the shape data 1210 by the determination in step S1203, the shooting is terminated. [0048] If there is no concealed object 106a whose terahertz image matches the shape data 1210 by the determination in step S1203, the process returns to step S1201); receive an identity of a best generated image ([0048] Then, another condition, for example, condition 1232 is selected as the shooting condition, and the image data is acquired again. If the terahertz image mainly contains the information of the concealed object 106a that matches the shape data 1210 by the determination in step S1203, the shooting is terminated).

Itsuji does not teach receive the angles and distances between the anomaly and the camera; create metadata from the angles and distances between the anomaly and the camera; and a graphical user interface configured to display modeling parameters used for the best generated image.

DeAngelus, in the same field of endeavor of detecting concealed objects, teaches receive the angles and distances between the anomaly and the camera ([0173] By pairing a collocated RF imaging sensor with a RGB/RGB-D camera, or sensor, it is possible to extract information about people and objects in the scene using computer vision techniques. [0120] a test subject, with or without an object on their person, moves throughout the field of view of the RF imaging sensor, and can stop at various positions within the field of view, and the RF imaging sensor can collect RF images in the snapshot mode of operation. Positions can be defined as a lateral offset from the center of the field of view (e.g., +1 foot, +2 feet), and range, or distance between the subject and/or object in the scene, to the RF imaging sensor (e.g., 4 feet, 6 feet, 8 feet). The angle of the individual with respect to the RF imaging sensor can also be varied (e.g., the dorsal side of the individual is facing the RF imaging sensor, the ventral side of the individual is facing the RF imaging sensor, or the individual is at an angle between facing the RF imaging sensor or facing away from the RF imaging sensor); create metadata from the angles and distances between the anomaly and the camera ([0055] FIG. 1B illustrates a process flow 100B illustrating fusing radio frequency (RF) signal, or volumetric data, with color or depth image data (e.g., red, green, and blue (RGB) data; or red, green, blue depth (RGB-D) data) by an embodiment of the imaging sensor system. [0056] FIG. 1B includes data blocks 151, 153, and 105…As an example, RGB/RGB-D data from data block 151 can represent data associated with a RGB or RGB-D images captured by one or more cameras of the imaging sensor system. [0057] RF volumetric data from data block 153 can represent imaging data corresponding to RF signals generated by an RF imaging sensors. For instance, the imaging data can include one or more three-dimensional (3-D) (RF-based) images, and/or information about the one or more 3-D images); and a graphical user interface configured to display modeling parameters used for the best generated image ([0028] FIG. 13 illustrates a ground truth labeling tool graphical user interface of an imaging sensor system. [0084] The visual products, can include, but are not limited to a visual display of a segmented portion of a RGB/RGB-D image according to different portions of the body, and/or a visual display of a segmented portion of a detected object in a RF image, and/or the display of an anomaly detected in the RF image. [0053] Computer(s) 105 can ingest the RF image from RF imaging system 101, via connection 193, along with the one or more RGB images and the one or more RGB-D images from Camera Imaging System 103, via connection 192, and generate a composite image of the RF image and the one or more RGB images and the one or more RGB-D images. [0235] The automated alerts 2406 include a RF image 2422 and a bounding box 2432 around a detected object of interest. In this case the object is a thermos. One or more of the processors can characterize the detected object based on the size, shape, material, and/or any other parameter associated with objects, and compare the parameters to other objects with the same or similar parameters and alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat. [0085] A processor can also cause a visual alert to be displayed and/or an auditory alert to be sounded in response to a processor comparing the detected object to known dangerous or lethal objects, and determining that the object is a weapon that can be used in a dangerous or lethal manner).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the interface of Itsuji with the teachings of DeAngelus to determine angles of an anomaly to a camera because "a subject… can stop at various positions within the field of view" [DeAngelus 0120], to output the image and metadata because "Aligning the RGB/RGB-D image with the RF volumetric image ensures that when a concealed object of interest is detected in an image generated by the RF imaging sensors, a user viewing a fused RGB/RGB-D data and the RF imaging sensor data can accurately determine where the concealed object of interest is located on the body of the person in the RGB/RGB-D image" [DeAngelus 0067], and to use a graphical user interface to display the image to "alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat" [DeAngelus 0235].

Itsuji does not teach output the metadata to a scenario modeler that generates a simulation of the anomaly interacting with a physical environment simulated by a 3D rendering engine of the scenario modeler according to the metadata; output the image to the scenario modeler; receive generated images from the scenario modeler, wherein the generated images each comprise a different scenario of an object under clothing in a physical environment.

Sun, in the same field of endeavor of hidden object detection and rendering, teaches output the metadata to a scenario modeler that generates a simulation of the anomaly interacting with a physical environment simulated by a 3D rendering engine of the scenario modeler according to the metadata (See Sun Fig. 1 above. [Conclusions pg. 453-454] A photometric stereo based strategy is proposed for the detection of concealed objects as an add-on function to current monitoring or surveillance systems. The new strategy has the capability to separate the surface reflectance or albedo to reveal true 3D shape information. The combined effect is to suppress surface colour and to enhance or make more apparent the 3D quality of the surface. The approach is low cost, needing only the addition of some of extra illumination. By rendering shaded images free from albedo effects, subtle geometrical details become visible as the resultant virtual images are related only to the surface normal information. The direction and intensity of the synthetic illumination used in the virtual image can be altered interactively allowing optimal results to be achieved); output the image to the scenario modeler (Fig. 1(a)); receive generated images from the scenario modeler, wherein the generated images each comprise a different scenario of an object under clothing in a physical environment (Fig. 1(b-d) [pg. 448] Figure 1 shows an example in which a toy gun, a knife and a grenade are hidden under a textured cloth. From the acquired luminance image (figure 1(a)) which represents that perceived by the human eye, or conventional surveillance camera, it is not easy to identify the objects hidden under the cloth.
However, they become significantly more obvious in the synthetic rendered images as shown in figure 1(b) and 1(c), which correspond to different illumination setting. So in addition to monitoring conventional images acquired from a camera(s) directly, the rendered images provide an additional very different modality of data which can reveal details difficult to find using other normal approaches).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the interface of Itsuji with the teachings of Sun to generate images of the anomaly with 3D rendering because "The concealed object can be metallic and plastic or ceramic in the form of handguns and knives or explosives, which may not be detectable with any one of the sensing modalities mentioned above" [Sun pg. 446 para. 5].

Regarding claim 9, Itsuji, DeAngelus, and Sun teach the interface of claim 8. DeAngelus teaches wherein the graphical user interface is additionally configured to display the image of the anomaly and the best generated image ([0028] FIG. 13 illustrates a ground truth labeling tool graphical user interface of an imaging sensor system. [0053] Computer(s) 105 can ingest the RF image from RF imaging system 101, via connection 193, along with the one or more RGB images and the one or more RGB-D images from Camera Imaging System 103, via connection 192, and generate a composite image of the RF image and the one or more RGB images and the one or more RGB-D images. [0235] The automated alerts 2406 include a RF image 2422 and a bounding box 2432 around a detected object of interest. In this case the object is a thermos. One or more of the processors can characterize the detected object based on the size, shape, material, and/or any other parameter associated with objects, and compare the parameters to other objects with the same or similar parameters and alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the interface of Itsuji with the teachings of DeAngelus to use a graphical user interface to display the images to "alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat" [DeAngelus 0235].

Regarding claim 10, Itsuji, DeAngelus, and Sun teach the interface of claim 8. DeAngelus teaches wherein the modeling parameters comprise a type of object and a type of clothing ([0046] FIG. 24 illustrates different categories of output data products in the form of visual products or automated alert products, in accordance with exemplary embodiments of the present disclosure. [0239] In some embodiments, instead of one or more of the processors just detecting a concealed object of interest, one or more of the processors can segment portions of the scene of the RF image (scene segmentation 2501) by executing one or more computer executable instructions that cause the RF object detection architecture instruction set 2511 to segment the RF image into portions that include a person, or person of interest, luggage such as a backpack or roller bag, a random item in the scene such a box, or an item of clothing such as a coat. One or more of the processors can apply a segmentation mask around each of the segmented portions of the RF image. [0235] Automated alerts 2406 include characterization 2402 alerts and threat detection 2404 alerts. The automated alerts 2406 include a RF image 2422 and a bounding box 2432 around a detected object of interest. In this case the object is a thermos. One or more of the processors can characterize the detected object based on the size, shape, material, and/or any other parameter associated with objects, and compare the parameters to other objects with the same or similar parameters and alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the interface of Itsuji with the teachings of DeAngelus to display parameters corresponding to a type of object under a type of clothing to "alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat" [DeAngelus 0235].

Regarding claim 11, Itsuji teaches a method ([0124] FIG. 15 is a flow diagram illustrating an operation flow of the present embodiment. The present embodiment is characterized by the method of operation of the illumination unit 101) comprising the steps of: receiving an image of an anomaly from a camera ([0038] As shown in FIG. 1, the terahertz wave camera system may have a visible camera 109, and the image processing unit 105 determines the shape of the subject 106 by comparing the terahertz image with the visible image from the visible camera 109. [0014] In the present embodiment, it is assumed that the subject 106 is a person, and a concealed object 106a such as a gun or a knife is hidden under the covering 106b which is clothing); receiving angles and distances between the anomaly ([0038] As shown in FIG. 1, the terahertz wave camera system may have a visible camera 109, and the image processing unit 105 determines the shape of the subject 106 by comparing the terahertz image with the visible image from the visible camera 109. The visible image 944 includes shape information. [0123] Depending on the surface shape of the subject, the change in the angle of the surface is complicated and differs for each subject. Therefore, the preferable positional relationship between the illumination unit, which is a light source, and the detection unit may change for each subject. Further, when the subject moves, the positional relationship between the illumination unit, which is a light source, and the detection unit may change from moment to moment); sending the generated images to a comparison engine; sending the image of the anomaly to the comparison engine ([0101] the image processing unit 105 can also determine the shape of the subject 106 by comparing the terahertz image with the visible image from the visible camera 109. The visible image 944 can also be called shape data. [0048] Then, another condition, for example, condition 1232 is selected as the shooting condition, and the image data is acquired again. If the terahertz image mainly contains the information of the concealed object 106a that matches the shape data 1210 by the determination in step S1203, the shooting is terminated.
[0048] If there is no concealed object 106a whose terahertz image matches the shape data 1210 by the determination in step S1203, the process returns to step S1201); receiving an identity of a best generated image ([0048] Then, another condition, for example, condition 1232 is selected as the shooting condition, and the image data is acquired again. If the terahertz image mainly contains the information of the concealed object 106a that matches the shape data 1210 by the determination in step S1203, the shooting is terminated).

Itsuji does not teach receiving angles and distances between the anomaly and the camera; creating metadata from the angles and distances between the anomaly and the camera; and displaying modeling parameters used for the best generated image.

DeAngelus, in the same field of endeavor of detecting concealed objects, teaches receiving angles and distances between the anomaly and the camera ([0173] By pairing a collocated RF imaging sensor with a RGB/RGB-D camera, or sensor, it is possible to extract information about people and objects in the scene using computer vision techniques. [0120] a test subject, with or without an object on their person, moves throughout the field of view of the RF imaging sensor, and can stop at various positions within the field of view, and the RF imaging sensor can collect RF images in the snapshot mode of operation. Positions can be defined as a lateral offset from the center of the field of view (e.g., +1 foot, +2 feet), and range, or distance between the subject and/or object in the scene, to the RF imaging sensor (e.g., 4 feet, 6 feet, 8 feet). The angle of the individual with respect to the RF imaging sensor can also be varied (e.g., the dorsal side of the individual is facing the RF imaging sensor, the ventral side of the individual is facing the RF imaging sensor, or the individual is at an angle between facing the RF imaging sensor or facing away from the RF imaging sensor); creating metadata from the angles and distances between the anomaly and the camera ([0055] FIG. 1B illustrates a process flow 100B illustrating fusing radio frequency (RF) signal, or volumetric data, with color or depth image data (e.g., red, green, and blue (RGB) data; or red, green, blue depth (RGB-D) data) by an embodiment of the imaging sensor system. [0056] FIG. 1B includes data blocks 151, 153, and 105…As an example, RGB/RGB-D data from data block 151 can represent data associated with a RGB or RGB-D images captured by one or more cameras of the imaging sensor system. [0057] RF volumetric data from data block 153 can represent imaging data corresponding to RF signals generated by an RF imaging sensors. For instance, the imaging data can include one or more three-dimensional (3-D) (RF-based) images, and/or information about the one or more 3-D images); and displaying modeling parameters used for the best generated image ([0084] The visual products, can include, but are not limited to a visual display of a segmented portion of a RGB/RGB-D image according to different portions of the body, and/or a visual display of a segmented portion of a detected object in a RF image, and/or the display of an anomaly detected in the RF image. [0053] Computer(s) 105 can ingest the RF image from RF imaging system 101, via connection 193, along with the one or more RGB images and the one or more RGB-D images from Camera Imaging System 103, via connection 192, and generate a composite image of the RF image and the one or more RGB images and the one or more RGB-D images. [0235] The automated alerts 2406 include a RF image 2422 and a bounding box 2432 around a detected object of interest. In this case the object is a thermos. One or more of the processors can characterize the detected object based on the size, shape, material, and/or any other parameter associated with objects, and compare the parameters to other objects with the same or similar parameters and alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat. [0085] A processor can also cause a visual alert to be displayed and/or an auditory alert to be sounded in response to a processor comparing the detected object to known dangerous or lethal objects, and determining that the object is a weapon that can be used in a dangerous or lethal manner).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Itsuji with the teachings of DeAngelus to determine angles of an anomaly to a camera because "a subject… can stop at various positions within the field of view" [DeAngelus 0120], to output the image and metadata because "Aligning the RGB/RGB-D image with the RF volumetric image ensures that when a concealed object of interest is detected in an image generated by the RF imaging sensors, a user viewing a fused RGB/RGB-D data and the RF imaging sensor data can accurately determine where the concealed object of interest is located on the body of the person in the RGB/RGB-D image" [DeAngelus 0067], and to display the image to "alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat" [DeAngelus 0235].

Itsuji does not teach sending the metadata to a scenario modeler that generates a simulation of the anomaly interacting with a physical environment simulated by a 3D rendering engine of the scenario modeler according to the metadata; receiving generated images from the scenario modeler, wherein the generated images each comprise a different scenario of an object under clothing in a physical environment.

Sun, in the same field of endeavor of hidden object detection and rendering, teaches sending the metadata to a scenario modeler that generates a simulation of the anomaly interacting with a physical environment simulated by a 3D rendering engine of the scenario modeler according to the metadata (See Sun Fig. 1 above. [pg. 446 para. 5] Images from same systems but with the addition of structured illumination can recover surface normal information which can be used to produce albedo free rendered images, and detect subtle surface features caused by concealed threats, such as weapons, under the clothing or any other camouflaged cover. [Conclusions pg. 453-454] A photometric stereo based strategy is proposed for the detection of concealed objects as an add-on function to current monitoring or surveillance systems. The new strategy has the capability to separate the surface reflectance or albedo to reveal true 3D shape information. The combined effect is to suppress surface colour and to enhance or make more apparent the 3D quality of the surface. The approach is low cost, needing only the addition of some of extra illumination. By rendering shaded images free from albedo effects, subtle geometrical details become visible as the resultant virtual images are related only to the surface normal information.
The direction and intensity of the synthetic illumination used in the virtual image can be altered interactively allowing optimal results to be achieved); receiving generated images from the scenario modeler, wherein the generated images each comprise a different scenario of an object under clothing in a physical environment (Fig. 1(b-d) [pg. 448] Figure 1 shows an example in which a toy gun, a knife and a grenade are hidden under a textured cloth. From the acquired luminance image (figure 1(a)) which represents that perceived by the human eye, or conventional surveillance camera, it is not easy to identify the objects hidden under the cloth. However, they become significantly more obvious in the synthetic rendered images as shown in figure 1(b) and 1(c), which correspond to different illumination setting. So in addition to monitoring conventional images acquired from a camera(s) directly, the rendered images provide an additional very different modality of data which can reveal details difficult to find using other normal approaches).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Itsuji with the teachings of Sun to generate a simulation of the anomaly interacting with the physical environment by rendering because "The concealed object can be metallic and plastic or ceramic in the form of handguns and knives or explosives, which may not be detectable with any one of the sensing modalities mentioned above" [Sun pg. 446 para. 5].

Regarding claim 12, Itsuji, DeAngelus, and Sun teach the method of claim 11. DeAngelus teaches displaying the image of the anomaly and the best generated image ([0046] FIG. 24 illustrates different categories of output data products in the form of visual products or automated alert products, in accordance with exemplary embodiments of the present disclosure. [0084] The visual products, can include, but are not limited to a visual display of a segmented portion of a RGB/RGB-D image according to different portions of the body, and/or a visual display of a segmented portion of a detected object in a RF image, and/or the display of an anomaly detected in the RF image). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Itsuji with the teachings of DeAngelus to display the image of the anomaly and generated image to "alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat" [DeAngelus 0235].

Regarding claim 13, Itsuji, DeAngelus, and Sun teach the method of claim 11. DeAngelus teaches wherein the modeling parameters comprises a particular item under a particular type of clothing ([0046] FIG. 24 illustrates different categories of output data products in the form of visual products or automated alert products, in accordance with exemplary embodiments of the present disclosure. [0239] In some embodiments, instead of one or more of the processors just detecting a concealed object of interest, one or more of the processors can segment portions of the scene of the RF image (scene segmentation 2501) by executing one or more computer executable instructions that cause the RF object detection architecture instruction set 2511 to segment the RF image into portions that include a person, or person of interest, luggage such as a backpack or roller bag, a random item in the scene such a box, or an item of clothing such as a coat. One or more of the processors can apply a segmentation mask around each of the segmented portions of the RF image. [0235] Automated alerts 2406 include characterization 2402 alerts and threat detection 2404 alerts. The automated alerts 2406 include a RF image 2422 and a bounding box 2432 around a detected object of interest. In this case the object is a thermos. One or more of the processors can characterize the detected object based on the size, shape, material, and/or any other parameter associated with objects, and compare the parameters to other objects with the same or similar parameters and alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Itsuji with the teachings of DeAngelus to display parameters corresponding to a particular item under a particular type of clothing to "alert an operator of the imaging sensor system about the possibility of an object that could be a potential threat" [DeAngelus 0235].

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacqueline R Zak, whose telephone number is (571) 272-4077. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACQUELINE R ZAK/
Examiner, Art Unit 2666

/EMILY C TERRELL/
Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Nov 17, 2022: Application Filed
May 06, 2025: Non-Final Rejection (§103)
Aug 01, 2025: Interview Requested
Aug 12, 2025: Examiner Interview Summary
Aug 12, 2025: Applicant Interview (Telephonic)
Aug 13, 2025: Response Filed
Sep 16, 2025: Final Rejection (§103)
Dec 17, 2025: Request for Continued Examination
Jan 16, 2026: Response after Non-Final Action
Mar 08, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586340: PIXEL PERSPECTIVE ESTIMATION AND REFINEMENT IN AN IMAGE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12462343: MEDICAL DIAGNOSTIC APPARATUS AND METHOD FOR EVALUATION OF PATHOLOGICAL CONDITIONS USING 3D OPTICAL COHERENCE TOMOGRAPHY DATA AND IMAGES (granted Nov 04, 2025; 2y 5m to grant)
Patent 12373946: ASSAY READING METHOD (granted Jul 29, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 3 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 55% (-11.4%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
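The projected numbers compose directly from the examiner figures above: the 67% baseline is the career allow rate (8 of 12), and the interview scenario simply applies the -11.4% lift to it. A quick arithmetic check:

```python
base = 8 / 12            # career allow rate: 8 granted of 12 resolved
interview_lift = -0.114  # allow-rate delta observed for interviewed cases
print(f"baseline grant probability: {base:.0%}")                   # 67%
print(f"with interview:             {base + interview_lift:.0%}")  # 55%
```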
