Prosecution Insights
Last updated: April 19, 2026
Application No. 18/666,041

FREIGHT CONTAINER IDENTIFICATION MARK LOCATOR SYSTEM AND METHOD THEREOF

Status: Non-Final OA (§102)
Filed: May 16, 2024
Examiner: NELSON, COURTNEY J
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Ironyun Inc.
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 86%, above average (217 granted / 252 resolved; +24.1% vs TC avg)
Interview Lift: +9.4%, a moderate lift, comparing resolved cases with and without an interview
Typical Timeline: 2y 7m avg prosecution; 32 applications currently pending
Career History: 284 total applications across all art units

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 51.1% (+11.1% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 252 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on May 31, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

The information disclosure statement filed May 19, 2025 fails to comply with the provisions of 37 CFR 1.98(a)(4) because it lacks the appropriate size fee assertion. It has been placed in the application file, but the information referred to therein has not been considered as to the merits.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: "A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention."

Claims 1-2 and 7-8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by C.-S. Fahn, B.-Y. Su and M.-L. Wu, "Vision-based identification extraction for freight containers," 2015 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR), Guangzhou, China, 2015, pp. 51-57, doi: 10.1109/ICWAPR.2015.7295925 (hereinafter Fahn).

Regarding independent claim 1, Fahn discloses a freight container identification mark locator method (abstract: "Based on computer vision techniques, a fast and efficient method to extract the identification from freight containers is presented in this paper."), executed by a processor unit (section 4.1, experiment setup: "The computer used in the experiments is equipped with the Intel(R) Core(TM) i7-4800MQ 2.70GHz CPU with 8.00GB RAM. The operating system is Microsoft Windows 8.1, and the identification extraction system is implemented by the Microsoft Visual C# 2013 programming language."), and comprising the following steps:

- receiving an image having a freight container therein with a freight container identification mark from a camera unit (section 3.1, image pre-processing: "The input images produced by a camera;" "The image of a freight container can be segmented into three parts, which are the background, freight container, and texts.");

- identifying a first reference point and a second reference point from the freight container in the image (section 3.1, image pre-processing: "The image of a freight container can be segmented into three parts, which are the background, freight container, and texts. That is setting k=3 to segment the image into three parts in the light of its intensity. Next, the areas of the three parts are calculated;" the container coordinates are read as the reference points, since coordinates must be present in order to calculate an area);

- obtaining a first coordinate of the first reference point and a second coordinate of the second reference point according to a coordinate model (section 3.1, image pre-processing, same passage; the container coordinates are read as the reference points);

- calculating a representative coordinate for a specified area in the image according to the first coordinate and the second coordinate (section 3.1, image pre-processing: "the areas of the three parts are calculated. We choose the segment with a less area as a text region, because the characters printed on a freight container occupy the least pixel counts within the image;" the coordinates of the regions are used to calculate the areas of the three regions, the text region is determined as the one having the smallest area, and the coordinates of the text region are read as the representative coordinate for a specified area; further, because the areas are determined from the coordinates and compared to find the smallest, the determination of the text region is based on the coordinates of the region segmented as the freight container); and

- outputting the representative coordinate (section 3.1, image pre-processing; the text region is output for further processing to determine the characters);

wherein the specified area comprises the freight container identification mark (section 3.1, image pre-processing: "We choose the segment with a less area as a text region, because the characters printed on a freight container occupy the least pixel counts within the image").

Regarding dependent claim 2, the rejection of claim 1 is incorporated herein. Additionally, Fahn further discloses wherein the first reference point and the second reference point respectively correspond to a top left corner casting and a top right corner casting of the freight container (section 3.1, image pre-processing; segmenting the freight container is read as determining the corners where the castings would be located; see also Figure 4).

Regarding independent claim 7, the rejection of claim 1 applies directly. Additionally, Fahn discloses a freight container identification mark locator system (abstract), comprising: a memory unit storing a coordinate model (section 4.1, experiment setup, quoted above); a camera unit configured to capture an image having a freight container therein with a freight container identification mark (section 3.1, image pre-processing: "The input images produced by a camera;" see also Figure 4); and a processor unit connected to the camera unit and the memory unit (section 4.1, experiment setup, quoted above); wherein the processor unit is configured to receive the image from the camera unit, identify a first reference point and a second reference point from the freight container in the image, obtain a first coordinate of the first reference point and a second coordinate of the second reference point according to the coordinate model, calculate a representative coordinate for a specified area in the image according to the first coordinate and the second coordinate, and output the representative coordinate, wherein the specified area comprises the freight container identification mark (section 3.1, image pre-processing, mapped exactly as for claim 1).

Regarding dependent claim 8, the rejection of claim 7 is incorporated herein. Additionally, Fahn further discloses wherein the first reference point and the second reference point respectively correspond to a top left corner casting and a top right corner casting of the freight container (section 3.1, image pre-processing, mapped exactly as for claim 2; see also Figure 4).

Allowable Subject Matter

Claims 3-6 and 9-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter. The closest prior art of record teaches methods of analyzing freight container images to detect identification or serial numbers. However, none of it, alone or in any combination, teaches determining predictions of width and height based on differences between the values of different reference points and between an average of the points, then determining a vertical point from a difference between the predicted width and whichever is higher of the y values of the reference points.

The closest prior art, Fahn, discloses "a fast and efficient method to extract the identification from freight containers" (abstract). Further, as noted above, Fahn discloses in section 3.1 that "the image of a freight container can be segmented into three parts, which are the background, freight container, and texts." Fahn does not disclose any form of determining predictions of width and height based on differences between the values of different reference points, nor the vertical point determination noted above.

Another similar piece of art, U.S. Publication No. 2022/0084186 to Ivens et al. (hereinafter Ivens), discloses "An automated inspection method and system are provided, for identifying and assessing the condition of shipping containers" (abstract). Additionally, Ivens further discloses at paragraph 0019, "In possible implementations, a virtual coordinate system based on the Container Equipment Data Exchange (CEDEX) can be built and associating coordinates with the container code and physical characteristics of the shipping container, according to said virtual coordinate system, to position said container code and/or physical characteristics in said virtual coordinate system." However, there is no calculation of differences or averages of reference values.

Further, U.S. Publication No. 2018/0315065 to Zhang et al. (hereinafter Zhang) discloses generating bounding boxes of text data using coordinate values: "The bounding box coordinates shown in Table 1 are typically indicated as pixel positions within each image; in the example above, two pixel positions are included in each record, corresponding to the top-left and bottom-right corners of each bounding box" (paragraph 0030). However, there is no calculation of differences or averages of reference values.

Thus, Fahn, Ivens and Zhang, either alone or in combination, fail to disclose determining predictions of width and height based on differences between the values of different reference points and between an average of the points, then determining a vertical point from a difference between the predicted width and whichever is higher of the y values of the reference points.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

- CN113569829A discloses "a container code data identification method and system, wherein the method" (abstract).
- CN115690807A discloses "The invention belongs to the field of automatic container identification equipment, and discloses a container number identification method based on OCR technology" (abstract).

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Courtney J. Nelson, whose telephone number is (571) 272-3956. The examiner can normally be reached Monday - Friday, 8:00 - 4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/COURTNEY JOAN NELSON/
Primary Examiner, Art Unit 2661
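The anticipation mapping above leans on one concrete mechanism in Fahn: segment the image into three intensity clusters (background, container, text) with k=3, then take the smallest-area cluster as the text region. Fahn's quoted phrasing reads like k-means on pixel intensity, so that is what the minimal sketch below assumes; the function name, the k-means settings, and the mask output are our own illustration of the technique, not code from Fahn or from the application.

```python
# Minimal sketch of the k=3 intensity segmentation described in Fahn
# section 3.1: cluster pixel intensities into background / container /
# text, then pick the smallest-area cluster as the text region.
# Illustrative only; assumes k-means, which Fahn's quote suggests but
# the record does not confirm.
import cv2
import numpy as np

def locate_text_region(image_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of the presumed text region."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    samples = gray.reshape(-1, 1).astype(np.float32)

    # k=3: background, freight container, and texts (per Fahn).
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(samples, 3, None, criteria, 5,
                              cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(gray.shape)

    # The text cluster occupies the fewest pixels within the image.
    areas = [np.count_nonzero(labels == k) for k in range(3)]
    text_label = int(np.argmin(areas))
    return (labels == text_label).astype(np.uint8) * 255
```

A bounding box around the returned mask (for example via cv2.boundingRect on its nonzero pixel coordinates) would then yield the kind of representative coordinate the examiner reads onto the claim.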
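The allowable limitation, by contrast, is only paraphrased in the office action, and the paraphrase is rough, so any concrete formula is a guess. On one hedged reading, the claim derives a predicted width (and from it a height) from coordinate differences between the two corner-casting reference points, averages the points horizontally, and places a vertical point using the predicted dimensions and the larger of the two y values. A speculative sketch under that reading follows; every name, constant, and sign choice is an assumption, not the claimed method.

```python
# Speculative sketch of the allowable limitation as paraphrased in the
# office action: a predicted width/height from reference-point
# differences, an average of the points, and a vertical point derived
# from the predicted dimensions and the larger y value. The exact
# formula is NOT in the record; every constant here is an assumption.
from typing import Tuple

Point = Tuple[float, float]

def predict_mark_anchor(p1: Point, p2: Point, aspect: float = 0.25) -> Point:
    """Guess an anchor for the ID-mark search area from the top-left
    (p1) and top-right (p2) corner castings of the container."""
    pred_width = abs(p2[0] - p1[0])      # difference between x values
    pred_height = pred_width * aspect    # assumed width/height relation
    avg_x = (p1[0] + p2[0]) / 2.0        # average of the two points
    # The OA ties the vertical point to the predicted width and the
    # higher of the two y values; the sign and scale are guesses
    # (in image coordinates, larger y is lower in the frame).
    vert_y = max(p1[1], p2[1]) + pred_height
    return (avg_x, vert_y)

# Example: corner castings detected at (120, 80) and (920, 84).
print(predict_mark_anchor((120.0, 80.0), (920.0, 84.0)))  # (520.0, 284.0)
```

Whatever the exact arithmetic turns out to be, the examiner's stated point is that none of Fahn, Ivens, or Zhang computes differences or averages of reference-point values at all, which is why claims 3-6 and 9-12 are indicated as allowable.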

Prosecution Timeline

May 16, 2024
Application Filed
Feb 18, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603175
METHOD AND APPARATUS FOR DETERMINING DIAGNOSIS RESULT DATA
2y 5m to grant; granted Apr 14, 2026
Patent 12597188
SYSTEMS AND METHODS FOR PROCESSING ELECTRONIC IMAGES FOR PHYSIOLOGY-COMPENSATED RECONSTRUCTION
2y 5m to grant; granted Apr 07, 2026
Patent 12597494
METHOD AND APPARATUS FOR TRAINING MEDICAL IMAGE REPORT GENERATION MODEL, AND IMAGE REPORT GENERATION METHOD AND APPARATUS
2y 5m to grant; granted Apr 07, 2026
Patent 12588881
PROVIDING A RESULT DATA SET
2y 5m to grant; granted Mar 31, 2026
Patent 12592016
Material-Specific Attenuation Maps for Combined Imaging Systems
2y 5m to grant; granted Mar 31, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 96% (+9.4%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 252 resolved cases by this examiner. Grant probability is derived from the career allow rate.
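As a quick sanity check, the headline numbers can be reproduced from the career stats above, assuming the tool simply reports the career allow rate as the grant probability and adds the interview lift on top; the footnote implies but does not state this model.

```python
# Sanity check of the projections against the examiner's career stats.
# Assumes grant probability = career allow rate, and that the
# with-interview figure is allow rate + interview lift; the tool's
# actual model is not disclosed.
granted, resolved = 217, 252
allow_rate = granted / resolved          # 0.8611 -> shown as 86%
interview_lift = 0.094                   # the +9.4% lift reported above
with_interview = allow_rate + interview_lift
print(f"{allow_rate:.1%} base, {with_interview:.1%} with interview")
# -> "86.1% base, 95.5% with interview" (the page rounds to 86% / 96%)
```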
