Prosecution Insights
Last updated: April 19, 2026
Application No. 18/691,988

COMPUTER-IMPLEMENTED SYSTEM AND METHOD FOR DETECTING PRESENCE AND INTACTNESS OF A CONTAINER SEAL

Non-Final OA §112
Filed: Mar 14, 2024
Examiner: CATO, MIYA J
Art Unit: 2681
Tech Center: 2600 — Communications
Assignee: Atai Labs Private Limited
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability With Interview: 89%

Examiner Intelligence

Career Allow Rate: 77% (above average; 513 granted / 670 resolved; +14.6% vs TC avg)
Interview Lift: +12.0% (moderate), based on resolved cases with interview
Avg Prosecution: 2y 6m typical timeline; 24 applications currently pending
Career History: 694 total applications across all art units

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 25.8% (-14.2% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)
TC averages are estimates; figures based on career data from 670 resolved cases.

Office Action

§112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-18 are pending in this application.

Drawings

The drawings received on 3/14/2024 are accepted for examination purposes.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in India on 9/15/2021. It is noted, however, that applicant has not filed a certified copy of the IN202141041657 application as required by 37 CFR 1.55.

Claim Objections

Claim 10 is objected to because of the following informalities; appropriate correction is required:
- Lines 5-6 recite “detect the motion of the vehicle by a motion detection module”, but “the motion” and “the vehicle” lack antecedent basis. Examiner treats this as “detect a motion of a vehicle by a motion detection module”.
- Lines 7-8 recite “the one or more consecutive frames”, which lacks antecedent basis. Examiner treats this as “one or more consecutive frames”.
- Line 8 recites “a motion detection module”, where line 6 already recites “a motion detection module”. Examiner treats line 8 as “the motion detection module”.
- Line 11 recites “the pre-processing module”, which lacks antecedent basis. Examiner treats this as “a pre-processing module”.
- Line 16 recites “the one or more lock images”, which lacks antecedent basis. Examiner treats this as “one or more lock images”.
- Line 16 recites “the seal classification module”, which lacks antecedent basis. Examiner treats this as “a seal classification module”.
- Line 32 recites “the seal detection module”, which lacks antecedent basis. Examiner treats this as “a seal detection module”.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because they use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “seal detection module”, “pre-processing module”, “motion detection module”, “lock detection module”, “visual object detection module”, “seal classification module” and “post-processing module” in claims 1-9.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid such interpretation.

Allowable Subject Matter

Claims 1-9 are allowed.
Claims 10-18 would be allowable if rewritten or amended to overcome the objections set forth in this Office action and/or the objections for lacking antecedent basis. Applicant's reply must either comply with all formal requirements or specifically traverse each requirement not complied with, as no art rejection has been indicated. See 37 CFR 1.111(b) and MPEP § 707.07(a).

The following is a statement of reasons for the indication of allowable subject matter: Hofman (US-2015/0063634) in view of Weinstein et al. (US-2020/0049632) and Li et al. (US-2022/0214243), and further in view of the prior art searched and/or cited, does not teach or render obvious the combination of limitations including:

“A system for detecting presence and intactness of one or more seals on a container, comprising: a first camera, a second camera, and a third camera configured to detect motion of a vehicle and enable to capture a first camera feed, a second camera feed, and a third camera feed, and deliver the first camera feed, the second camera feed and the third camera feed to a computing device over a network, whereby the computing device comprising a seal detection module configured to detect presence and intactness of one or more seals on a container using an activation map; a pre-processing module comprising a motion detection module configured to receive the third camera feed as an input to detect the motion of a vehicle, the motion detection module configured to compare a selected region of interest from the one or more consecutive frames of the third camera to detect motion of the vehicle using a frame difference, the pre-processing module configured to save one or more consecutive frames from the first camera and the second camera when the vehicle starts crossing the third camera, whereby the frame difference is computed using one or more computer vision methods, the third camera configured to detect motion of the vehicle, the third camera is positioned perpendicular to the container passing through a vehicle lane, the first camera is positioned front side to the container passing through the vehicle lane and the second camera is positioned rear side to the container passing through the vehicle lane; a lock detection module comprising a visual object detection module configured to receive the one or more saved frames from the pre-processing module as the input and detect one or more locks present in the one or more saved frames of the first camera and the second camera, the lock detection module configured to detect the presence of the one or more locks and transmit the one or more lock images to a seal classification module; whereby the seal classification module configured to receive the one or more lock images from the lock detection module as the input and classify the one or more lock images to identify whether the one or more locks are sealed, the seal classification module configured to determine a color of the one or more container seals by extracting an attention region and observing one or more pixel values in the extracted region using the activation map of a classification model and histograms, the seal classification module configured to determine intactness of the one or more container seals by extracting an attention region and observing one or more pixel values in the extracted region, the seal classification module configured to determine the color and the seal intactness from the one or more lock images by generating one or more attention maps, the one or more attention maps are used to obtain better localization of the seal, the seal classification module comprising a computer vision and neural network methods configured to determine the color and the seal intactness on obtaining the exact location of the seal; the seal classification module configured to pass seal information to a post-processing module as a JavaScript Object Notation (JSON) file with a frame number; the post-processing module configured to receive the JavaScript Object Notation (JSON) files corresponding to the container and tracks at least one seal separately using a DeepSort tracking model thereby generating a final output by considering an averaged result over the one or more lock images; and a cloud server configured to receive a final output from the seal detection module over the network and updates the final output obtained by the seal detection module on the cloud server, the final output comprising number of seals identified on the one or more locks of the container” as recited in independent claim 1; and

“A method for detecting presence and intactness of one or more seals on a container, comprising: enabling a first camera, a second camera, and a third camera to capture a first camera feed, a second camera feed, and a third camera feed; receiving the third camera feed as an input to detect the motion of the vehicle by a motion detection module on a computing device; comparing a selected region of interest from the one or more consecutive frames by a motion detection module to detect motion of the vehicle using a frame difference; saving one or more consecutive frames from the first camera and the second camera by the pre-processing module when the vehicle starts crossing the third camera; receiving the one or more saved frames by a lock detection module from the pre-processing module as an input and detecting one or more locks present in the one or more saved frames of the first camera and the second camera; receiving the one or more lock images by the seal classification module from the lock detection module as an input and classifying the one or more lock images to identify whether the one or more locks are sealed; determining a color of the one or more seals by extracting an attention region and observing one or more pixel values in the extracted region by the seal classification module; determining intactness of the one or more seals by extracting an attention region and observing one or more pixel values in the extracted region by the seal classification module; passing the seal information to a post-processing module as a JavaScript Object Notation (JSON) file with a frame number; receiving the JavaScript Object Notation (JSON) files corresponding to the container by the post-processing module and tracking each seal separately using a DeepSort tracking model; generating a final output by considering an averaged result over the one or more lock images; and updating the final output obtained by the seal detection module on a cloud server over a network, the final output comprising number of seals identified on the one or more locks of the container” as recited in independent claim 10.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIYA J CATO, whose telephone number is (571) 270-3954. The examiner can normally be reached M-F, 8:30-5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi Sarpong, can be reached at (571) 270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MIYA J CATO/
Primary Examiner, Art Unit 2681
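The frame-difference motion check recited in the claims (comparing a selected region of interest across consecutive frames of the third camera) is a standard computer-vision step. The sketch below illustrates the idea in plain Python on grayscale frames modeled as nested lists; the function names, default ROI, and threshold value are illustrative assumptions, not the applicant's implementation, which would operate on real camera feeds (e.g., via OpenCV).

```python
def roi(frame, top, left, height, width):
    """Crop the selected region of interest from a grayscale frame
    (modeled here as a 2-D list of pixel intensities)."""
    return [row[left:left + width] for row in frame[top:top + height]]

def frame_difference(prev_roi, curr_roi):
    """Mean absolute pixel difference between two consecutive ROIs."""
    diffs = [abs(p - c)
             for prev_row, curr_row in zip(prev_roi, curr_roi)
             for p, c in zip(prev_row, curr_row)]
    return sum(diffs) / len(diffs)

def motion_detected(prev_frame, curr_frame, threshold=10.0,
                    top=0, left=0, height=2, width=2):
    """Flag vehicle motion when the ROI difference exceeds a tuned threshold
    (threshold and ROI geometry are hypothetical defaults)."""
    return frame_difference(roi(prev_frame, top, left, height, width),
                            roi(curr_frame, top, left, height, width)) > threshold
```

In the claimed pipeline this check gates the pre-processing module: frames from the first and second cameras are saved only once the vehicle starts crossing the third camera's field of view.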
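The seal classification module is recited as determining seal color by extracting an attention region from an activation map and histogramming pixel values in that region. A minimal sketch of that pattern, assuming a thresholded activation map and per-pixel color labels (the threshold and function names are hypothetical):

```python
from collections import Counter

def attention_region(activation_map, threshold=0.6):
    """Coordinates whose activation exceeds the threshold; a stand-in for the
    attention region extracted from a classifier's activation map."""
    return [(r, c)
            for r, row in enumerate(activation_map)
            for c, activation in enumerate(row)
            if activation >= threshold]

def dominant_color(color_labels, activation_map, threshold=0.6):
    """Histogram the per-pixel color labels inside the attention region and
    return the most frequent one."""
    hist = Counter(color_labels[r][c]
                   for r, c in attention_region(activation_map, threshold))
    return hist.most_common(1)[0][0]
```

Restricting the histogram to the high-attention pixels is what gives the claimed "better localization of the seal": background pixels never vote on the seal's color or intactness.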
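The post-processing module receives per-frame JSON payloads, tracks each seal, and emits a final output "by considering an averaged result over the one or more lock images." The sketch below shows that aggregation step only; the record fields (track_id, sealed_confidence) and the 0.5 decision threshold are assumptions, and the claimed DeepSort tracking (which would assign the track ids) is out of scope here.

```python
import json
from collections import defaultdict
from statistics import mean

def aggregate_seal_results(json_records, threshold=0.5):
    """Average per-frame sealed-confidence scores per seal track and emit a
    final verdict. Each record is a JSON string for one classified lock
    image; field names are hypothetical."""
    by_track = defaultdict(list)
    for record in json_records:
        payload = json.loads(record)
        by_track[payload["track_id"]].append(payload["sealed_confidence"])
    return {
        track_id: {
            "sealed": mean(scores) >= threshold,
            "confidence": round(mean(scores), 3),
        }
        for track_id, scores in by_track.items()
    }
```

A final output of this shape is what the claims then push to the cloud server, including the number of seals identified on the container's locks (here, the number of distinct tracks).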

Prosecution Timeline

Mar 14, 2024
Application Filed
Feb 05, 2026
Non-Final Rejection — §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597127
SYSTEMS AND METHODS FOR AUTOMATED IDENTIFICATION AND CLASSIFICATION OF LESIONS IN LOCAL LYMPH AND DISTANT METASTASES
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586415
INFORMATION PROCESSING METHOD, INFORMATION PROCESSING SYSTEM, AND INFORMATION TERMINAL TO ASSIST USER LEARNING
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586673
SYSTEMS AND METHODS FOR RADIATION ENTRY IN DOSE MANAGEMENT
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12575895
MIXED REALITY IMAGE GUIDANCE FOR MEDICAL INTERVENTIONS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12569319
COMBINED FACE SCANNING AND INTRAORAL SCANNING
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77% (89% with interview, +12.0%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 670 resolved cases by this examiner. Grant probability derived from career allow rate.
