Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 04/26/2024 and 11/01/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 5-8, 12, and 15-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Thome et al., “A cognitive and video-based approach for multinational License Plate Recognition,” Machine Vision and Applications, Springer, vol. 22, no. 2, published 01 March 2010, pages 389-407 (cited on the 11/01/2024 IDS) (hereinafter “Thome”).
With respect to claim 1, Thome teaches a method for tracking an object in a dynamic scene, comprising:
capturing a video and obtaining continuous frames (Thome, sect. 5 “In this section, we explain our approach for explicitly taking advantage of the temporal aspect of the video”);
frame-by-frame recognizing one or more objects in each of the frames and choosing a target object in a first frame of the frames (Thome, sect. 5.1 “We define a tracking score STracking between a plate identified at time t and a new candidate being recognized at time t + 1…”);
individually calculating one or more distances between the target object in the first frame and one or more objects recognized in a second frame of the frames so as to obtain one or more object distances (see Thome sect. 5.1 “SGTracking, corresponds to a geometrical component that defines a similarity measurement involving position and speed…” where the geometrical component comprises object distances);
recognizing strings of the target object in the first frame and the one or more objects recognized in the second frame (see Thome sect. 5.1 “SRTracking, is related to the OCR result…”);
calculating a string similarity between the string of the target object in the first frame and the string of each of the one or more objects recognized in the second frame (see Thome sect. 5.1 “SRTracking, is related to the OCR result, and corresponds to a similarity score between the string extracted from the current plate and the previously updated buffer” as further described in sect. 5.2.1);
calculating an overall score according to the object distance and the string similarity between the target object in the first frame and each of the one or more objects recognized in the second frame (see Thome sect. 5.1: “We define a tracking score STracking between a plate identified at time t and a new candidate being recognized at time t + 1:
STracking = wG × SGTracking + wR × SRTracking
wG and wR are parameters that have been learnt from training data, using a cross-validation procedure.”); and
determining whether or not the target object in the first frame is any object appearing in the second frame according to the overall score (see Thome sect. 5.1 “The global tracking score STracking defined in Eq. 3 is matched against a given threshold to define vehicles entries and outcomes.”).
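For clarity of the mapping above, the following sketch (in Python) illustrates the quoted scoring scheme of Thome sect. 5.1: a geometric similarity and an OCR-string similarity are combined with learned weights, and the global score is matched against a threshold. The weight values, threshold, and distance normalization shown here are illustrative assumptions only; Thome learns the weights by cross-validation and does not disclose these particular values.

    import math

    def geometric_score(center_a, center_b, max_dist=200.0):
        # Similarity derived from the Euclidean distance between plate
        # centers; the normalization to [0, 1] is an assumed detail.
        dist = math.dist(center_a, center_b)
        return max(0.0, 1.0 - dist / max_dist)

    def tracking_score(s_geometric, s_recognition, w_g=0.5, w_r=0.5):
        # Eq. 3 of Thome sect. 5.1: STracking = wG x SGTracking + wR x SRTracking.
        # Thome learns wG and wR by cross-validation; 0.5/0.5 are placeholders.
        return w_g * s_geometric + w_r * s_recognition

    def same_object(s_tracking, threshold=0.6):
        # The global score is matched against a threshold to decide whether
        # the candidate at time t + 1 is the plate tracked at time t.
        return s_tracking >= threshold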
With respect to claim 5, Thome teaches the method according to claim 1, wherein the one or more objects to be frame-by-frame recognized are one or more license plates and the one or more license plates are one-by-one numbered (see Thome, sect. 5.1).
With respect to claim 6, Thome teaches the method according to claim 1, wherein, when the overall score between the target object in the first frame and each of the one or more objects recognized in the second frame is obtained, the overall score is compared with a score threshold so as to determine whether or not the target object in the first frame appears as any object of the one or more objects recognized in the second frame according to a comparison result (see Thome sect. 5.1 “The global tracking score STracking defined in Eq. 3 is matched against a given threshold to define vehicles entries and outcomes.”).
With respect to claim 7, Thome teaches the method according to claim 6, wherein, when it is determined that the target object appears in multiple continuous frames of the frames based on the overall score, a unique identifier is assigned to the target object in the continuous frames for tracking the target object (see Thome sect. 5.1 “At each time step, we define a binary matrix that contains the tracking score STracking between each detected plate and each previously tracked vehicle. Solving for best correspondences in this matrix makes it possible to track an unspecified number of vehicles.”).
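The correspondence step quoted above may be illustrated as follows: the tracking scores between every previously tracked vehicle and every newly detected plate form a matrix, and a best one-to-one assignment is extracted from it. The Hungarian solver used below is a stand-in, as Thome does not specify the exact correspondence search, and the threshold value is an assumption.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_plates(score_matrix, threshold=0.6):
        # Rows index previously tracked vehicles; columns index plates
        # detected in the new frame. Unmatched rows and columns correspond
        # to vehicle exits and entries, respectively.
        rows, cols = linear_sum_assignment(np.asarray(score_matrix), maximize=True)
        return [(r, c) for r, c in zip(rows, cols)
                if score_matrix[r][c] >= threshold]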
With respect to claim 8, Thome teaches the method according to claim 6, wherein the object distance between the target object in the first frame and each of the one or more objects recognized in the second frame is assigned with a distance score, the string similarity between the string of the target object in the first frame and the string of each of the one or more objects in the second frame is assigned with a string-similarity score, and the distance score and the string-similarity score are used to calculate the overall score (see Thome sect. 5.1: “We define a tracking score STracking between a plate identified at time t and a new candidate being recognized at time t + 1:
STracking = wG × SGTracking + wR × SRTracking
wG and wR are parameters that have been learnt from training data, using a cross-validation procedure.”).
Independent claim 12 has been analyzed and is rejected for the reasons set forth in claim 1 above. The additional hardware components of system claim 12 are disclosed in Thome sect. 7.3.
Claim 15 has been analyzed and is rejected for the reasons set forth in claim 5 above.
Claim 16 has been analyzed and is rejected for the reasons set forth in claim 5 above.
Claim 17 has been analyzed and is rejected for the reasons set forth in claims 6 and 7 above.
Claim 18 has been analyzed and is rejected for the reasons set forth in claim 8 above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2-4 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Thome et al., “A cognitive and video-based approach for multinational License Plate Recognition,” Machine Vision and Applications, Springer, vol. 22, no. 2, published 01 March 2010, pages 389-407 (cited on the 11/01/2024 IDS) (hereinafter “Thome”), in view of Gupta et al., U.S. Patent No. 11,978,267 (hereinafter “Gupta”).
With respect to claim 2, Thome teaches the method according to claim 1, but is silent as to the remaining limitations. In the similar field of video-based license plate recognition, Gupta teaches: wherein the object distance is a distance between a geometric center of the target object in the first frame and the geometric center of each of the one or more objects recognized in the second frame (see Gupta col. 5, lines 30-59, where the center of a bounding box representing a license plate to be tracked is defined and compared across video frames). Both Gupta and Thome are drawn to solving the problem of accurately tracking multiple license plates across a sequence of video frames. One of ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to combine Gupta’s use of the detected center of a bounding box of the license-plate candidate with Thome’s tracking system to improve the reliability of license-plate tracking across multiple frames.
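As an illustration of the center-distance computation attributed to Gupta above, the following sketch computes the Euclidean distance between the geometric centers of two detected bounding boxes. The (x_min, y_min, x_max, y_max) tuple layout is an assumption made for illustration, not Gupta’s actual data structure.

    def box_center(box):
        # box = (x_min, y_min, x_max, y_max); assumed layout.
        x_min, y_min, x_max, y_max = box
        return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

    def center_distance(box_a, box_b):
        # Euclidean distance between the geometric centers of two
        # license-plate bounding boxes in consecutive frames.
        (ax, ay), (bx, by) = box_center(box_a), box_center(box_b)
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5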
With respect to claim 3, Thome teaches the method according to claim 2, wherein the string of the target object in the first frame and the string of each of the one or more objects recognized in the second frame are license-plate numbers, each of which is a combination of at least one of multiple English alphabetic letters or multiple numbers; and the string similarity between the strings of two objects is an edit distance (see Thome, sect. 5.2.1. “…our approach is based on the Levenstein distance, or Edit Distance (ED), to measure the similarity between strings.”).
With respect to claim 4, Thome teaches the method according to claim 3, wherein the edit distance is a Levenshtein distance, i.e., the minimum number of single-character edits necessary to convert the string of one of the two objects into the string of the other (see Thome, sect. 5.2.1: “…our approach is based on the Levenstein distance, or Edit Distance (ED), to measure the similarity between strings.”).
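The Levenshtein distance recited above admits a compact dynamic-programming formulation. The sketch below is a textbook implementation provided for illustration; it is not drawn from Thome.

    def levenshtein(a, b):
        # Minimum number of single-character insertions, deletions, and
        # substitutions needed to convert string a into string b.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    # Example: one substitution separates these plate strings.
    assert levenshtein("ABC123", "ABC128") == 1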
Claim 13 has been analyzed and is rejected for the reasons set forth in claim 2 above.
Claim 14 has been analyzed and is rejected for the reasons set forth in claim 3 above.
Allowable Subject Matter
Claims 9-11 and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Stephen R Koziol whose telephone number is (408)918-7630. The examiner can normally be reached M-F 8 AM - 4 PM Pacific Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Koziol, can be reached at (408)918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Stephen R Koziol/Supervisory Patent Examiner, Art Unit 2665