Prosecution Insights
Last updated: April 19, 2026
Application No. 18/282,101

METHOD FOR INSPECTING ITEMS OF LUGGAGE IN ORDER TO DETECT OBJECTS

Non-Final OA: §101, §102, §103, §112
Filed: Mar 26, 2024
Examiner: CODRINGTON, SHANE WRENSFORD
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: Smiths Detection Germany GmbH
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 100% (1 granted / 1 resolved); +38.0% vs TC avg, above average
Interview Lift: -100.0% (resolved cases with interview vs. without)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 15 total applications across all art units; 14 currently pending

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 60.5% (+20.5% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 7.9% (-32.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 1 resolved case.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Preliminary Amendment

A preliminary amendment was filed on April 11, 2024, and has been entered and acknowledged. Claims 1, 10, 11, 12, and 13 have been amended. Claim 15 has been canceled. Claims 1-14, 16 and 17 are pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 9/14/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claim 14 is objected to because of the following informalities: Claim 14 recites evaluating a "first inspection image (II1)" and then evaluating a "first inspection image (II2)". This may be a typo. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use(s) a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "volume generation module", "image generation module", and "evaluation module" in claim 14, and "scanning module" in claim 16.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because "computer program product" is not limited to a non-transitory storage medium and therefore encompasses a transitory signal, which is non-statutory subject matter.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
It is unclear in claim 1 whether the neural network for detecting an object in the first inspection image and the neural network for detecting an object in the second inspection image are the same neural network or differing neural networks.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-9, 11-14, 16 and 17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Chen et al. (US 20140185923 A1, hereinafter "Chen").

As per claim 1, Chen teaches a method for checking items of luggage (L) in order to detect objects (Figure 4), comprising: generating a three-dimensional inspection volume (Figure 4, box S42); generating a two-dimensional first inspection image (II1) from the three-dimensional inspection volume along a first projection direction (Paragraph [0052]: "After the rotation, the vertices are organized into a 3D surface for surface rendering, and the 3D model is observed vertically from top, thereby obtaining a 2D depth projection image I.sub.0."); generating at least one two-dimensional second inspection image (II2) from the three-dimensional inspection volume along a second projection direction (PD2) which differs from the first projection direction (Figure 4, label S43); evaluating the first inspection image (II1) in order to detect objects (O) by means of a neural network (NN) (Chen calculates (evaluates) metrics from the generated depth projection images, including a metric of probability (Fig. 10 S104; Fig. 4 S44) and a metric of symmetry for each of the first through fourth depth projection images (Fig. 4 S45; Fig. 10 S105), and then generates a shape feature parameter based at least on those metrics (Fig. 4 S46; Fig. 10 S106). Chen then determines whether the object is suspicious based on the generated shape feature parameter (S107). Chen further teaches that the classifier used to classify the object based on the shape feature parameter may be a neural network (Paragraphs [0074]-[0077]). Therefore, Chen's "evaluating" of the first inspection image includes computing image-derived metrics and features from the image and using a classifier (a neural network) based on those features to detect and classify the object.); evaluating at least one second inspection image (II2) in order to detect objects (O) by means of a neural network (NN) (Chen calculates a metric of symmetry for each of the first, second, third and fourth depth projection images (S45/S105). The shape feature parameter is generated based at least on the metrics of symmetry of the first to fourth depth projection images (S46/S106), so inspection image 2 contributes to the evaluation results used by the classifier. Therefore, the evaluation of inspection image 2 is also "by means of a neural network," because the classifier (which may be a neural network) operates on the shape feature parameter generated from the first through fourth images.); and outputting the result of the evaluation steps (Paragraph [0033]: "The computer data processor 60 is configured to process data collected by the data collector, reconstruct the data and output results.").

As per claim 2, Chen teaches that the three-dimensional inspection volume (IV) is generated on the basis of a plurality of two-dimensional inspection scans (Paragraph (006): "generating, from the slice data, 3-dimensional (3D) volume data of at least one object in the luggage; calculating,").

As per claim 3, Chen teaches wherein the result of the evaluation of the first inspection image (II1) and the result of the evaluation of the second inspection image (II2) are combined to form a combined inspection result (CIR) (Figure 10, last two boxes in the flowchart).

As per claim 4, Chen teaches that at least one additional piece of information, in particular information about a material density of an object (O), is evaluated on the basis of the three-dimensional inspection volume (IV) (Paragraph [0032]: "In some embodiments, the shape feature of an object is first extracted, and then used in combination with characteristics involved in typical methods, such as atomic number and density, to achieve more efficient detection of suspicious explosives.").

As per claim 5, Chen teaches that identical neural networks (NN), in particular the same neural network, are used for the evaluations of the two-dimensional inspection images (Paragraph [0076]: "In an embodiment, the respective shape feature parameters in the above steps may be combined into a 15-dimensional shape feature vector F:…specific classifier is created with respect to feature vector F. The classifier, having been trained, can be used in classification and recognition of an unknown object…various types of classifiers may be used, such as…a neural network,").

As per claim 6, Chen teaches (Paragraph [0038]: "The projection data obtained by the detection and collection device 30 is stored in the computer 60 to reconstruct CT sections, and thus obtain slice data (CT slice) of the luggage 70. Then, the computer 60 executes software, for example, to extract 3D shape parameter for at least one object contained in the luggage 70 from the slice data for security inspection. According to a further embodiment, the above CT system may be a dual-energy CT system, that is, the x-ray source". Dual-energy computed tomography (DECT) creates material luminescence images.)

As per claim 7, Chen teaches the method according to claim 1, wherein the orientation of the first projection direction (PD1) and/or the second projection direction (PD2) is adjustable (Figures 7-9; Paragraph [0051]: "View angles for I.sub.0.about.I.sub.3 are defined with reference to a coordinate system shown in FIGS. 6, 8 and 9. Assume that the object is horizontally placed, and six (6) viewing directions are defined as view angles 1.about.6. …Further, it is possible to obtain an 'aligned' model by rotating and normalizing the model, as shown in FIG. 8. … The projection may be achieved by rotating the 3D model about y-axis perpendicular to the horizontal plane until the upper and lower halves of top-view projection are most symmetric.").

As per claim 8, Chen teaches (Paragraph [0072]: "A typical algorithm generally extracts tens of projections, and the calculated features include moment invariants feature, Spherical Harmonic Transform coefficient, local descriptor, and Heat Kernel.").

As per claim 9, Chen teaches that the steps of generating the two-dimensional inspection images are repeated at least once (Chen calculates, based on the 3D volume data, "a first depth projection image…and second third and fourth depth projection images" in other directions (Paragraph [0048]); Chen also describes projecting the model at multiple angles, "by projecting at the view angles View1, View2, and View3" (Paragraph [0051]), to obtain multiple projections. Therefore, Chen repeats the generation step at least once.) and that evaluating the generated two-dimensional inspection images is repeated at least once (Chen calculates "a metric of symmetry for each of the first, second, third, and fourth depth projection images" (Paragraph [0056]). As previously stated, Chen teaches that "various types of classifier may be used such as…a neural network." Therefore, Chen repeats the evaluation at least once, because Chen evaluates multiple generated projection images and does so with the neural network classifier.), wherein at least one of the projection directions is changed for the repetition (As previously stated, Chen expressly states projections are generated at different view angles (Figure 6, Figure 8, Figure 9). When Chen generates the next projection image, at least one projection direction is changed for the repetition.).

As per claim 11, Chen teaches the method according to claim 1, wherein the three-dimensional inspection volume (IV) is built up section by section (Paragraph [0029]: "Then, 3D volume data of at least one object in the luggage is generated from the slice data." and Paragraph [0089]: "2D slice images may be first analyzed on a section basis. A series of 2D binary masks… may be obtained through thresholding and image segmentation… 3D 'object' data across the sections may be obtained by connecting regions that are overlapping between sections and have high similarity…").

As per claim 12, Chen teaches that, when generating the three-dimensional inspection volume (IV), the generation is carried out with at least two different energy levels, in particular on the basis of a plurality of two-dimensional inspection scans (Paragraph [0038]: "…the above CT system may be a dual-energy CT system, that is, the x-ray source 10 in the rack 10 emits two kinds of rays of high and low energy levels, and the detection and collection device 30 detects projection data of the different energy levels…").

As per claim 13, Chen teaches that an alarm is issued as a result if at least one alarm object has been detected as an object (Paragraph [0031]: "If the shape feature parameter of the object meets certain shape requirement, material recognition is performed on the object. In this way, the false alarm rate can be reduced.").

As per claim 14, Chen teaches a control device (Figure 3, label 51) for carrying out a method having the features of claim 1, comprising: a volume generation module for generating a three-dimensional inspection volume (Fig. 2, label 66); an image generation module for generating a two-dimensional first inspection image (II1) from the three-dimensional inspection volume (IV) along a first projection direction (PD1) and for generating at least one two-dimensional second inspection image (II2) from the three-dimensional inspection volume (IV) along a second projection direction (PD2) which differs from the first projection direction (PD1) (Figure 1, labels 10 and 60); further comprising an evaluation module (40) for evaluating the first inspection image (II1) in order to detect objects (O) by means of a neural network (NN) and for evaluating the first inspection image (II2) in order to detect objects (O) by means of a neural network (NN); and an output module for outputting the result of the evaluation steps (Figure 1, label 60 and Figure 2, label 66).

As per claim 16, Chen teaches the control device according to claim 14, wherein a scanning module is provided for the acquisition of input data, in particular in the form of two-dimensional inspection scans (IS) (Paragraph [0033]: "…a detection and collection device 30. The bearing mechanism 40 bears an inspected luggage 70, and moves it to pass through a scanning region between the ray source 10 and the detection and collection device 30…").

As per claim 17, Chen teaches a computer program product comprising commands which, when the program is run by a computer, cause it to carry out the steps of a method having the features of claim 1 (Figure 1 and Figure 2; Paragraph [0093]: "some aspects of the embodiments disclosed here, in part or as a whole, may be equivalently implemented in an integrated circuit, as one or more computer programs").

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20140185923 A1, hereinafter "Chen") in view of Mery et al. (X-Ray Testing by Computer Vision). Chen teaches all claim limitations previously addressed in the rejection of claim 9; please see claim 9. Chen does not teach that a correlation between the results of the evaluation and the change in the at least one projection direction carried out is stored for future evaluations and/or changes.
Mery teaches a correlation between the results of the evaluation and the change in the at least one projection direction carried out (Figure 1: "applications on cargo inspection that employ active vision where a next best view is set according to the information of a single view". Here Mery shows evaluation information and results driving the change in view.) that is stored for future evaluations and/or changes (Figure 1. The active vision mechanism described here requires storing the evaluation information long enough to drive the next view change. The relationship between the evaluation and the results is integral to the "next best view" decision.).

Accordingly, a person of ordinary skill in the art would have been motivated to modify Chen's workflow with Mery's active vision technique so that the system can adapt the next projection direction based on what the prior result indicates, and retain that relationship for future iterations and inspections. This directly addresses a known issue in baggage inspection, where clutter and overlapping objects make single-view interpretation unreliable: viewpoint selection affects detectability. Mery explains that baggage images are often "intricate…due to overlapping parts", which is why active vision is used to guide the system to poses where detection performance should be higher, and why multiple views can confirm an inspection and filter out false alarms. This modification yields a greater reduction in false alarms by choosing a more informative next projection direction when the current evaluation may be ambiguous. It also yields more objective and repeatable performance by using a systematic viewpoint-selection loop and retaining the evaluation/view-change relationship for later use.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANE WRENSFORD CODRINGTON, whose telephone number is (571) 272-8130. The examiner can normally be reached 8:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHANE WRENSFORD CODRINGTON/
Examiner, Art Unit 2667

/TOM Y LU/
Primary Examiner, Art Unit 2667
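For readers unfamiliar with the technique at issue in the §102 mapping, the core operation the claims recite (collapsing a 3D inspection volume into 2D inspection images along two differing projection directions) can be sketched as below. This is an illustrative approximation only, not the applicant's or Chen's actual implementation; the array shape, the use of a simple sum projection, and the `project` helper are all assumptions made for the example.

```python
import numpy as np

# Toy 3D "inspection volume" (e.g., a CT reconstruction of a bag).
# Axes are (z, y, x); values stand in for attenuation/density.
rng = np.random.default_rng(0)
volume = rng.random((32, 64, 48))

def project(vol, axis):
    """Collapse the 3D volume into a 2D image along one projection
    direction (a simple sum projection, akin to a radiograph)."""
    return vol.sum(axis=axis)

# First inspection image (II1) along a first projection direction (PD1):
ii1 = project(volume, axis=0)   # top-down view, shape (64, 48)

# Second inspection image (II2) along a differing direction (PD2):
ii2 = project(volume, axis=1)   # side view, shape (32, 48)

print(ii1.shape, ii2.shape)
```

Per the claims, each 2D image would then be passed to a detector (a neural network) and the per-view results combined into an overall inspection result.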

Prosecution Timeline

Mar 26, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 0% (-100.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
