DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/21/2026 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 9-11 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Quiala et al. (“Enhancing License Plate Recognition in Videos Through Character-Wise Temporal Combination”, September 27-29, 2023) in view of Kavner (US 20020140577 A1).
Concerning claim 1, Quiala et al. (hereinafter Quiala) teaches an edge-computing method for automatic license plate recognition, which is performed in an edge device, comprising:
receiving a streaming video having continuous frames (Abstract – License Plate Recognition (LPR) in videos (sequence of frames); § 3.1 Detection and OCR Methods for LPR – combination of video frames);
frame-by-frame determining one or more license plates, and extracting an image of each of the one or more license plates in each of the continuous frames from full frame images of the continuous frames (§ 3.2. Video Frame Combination Strategies - For each car sequence, we obtain a collection of identified license plates… We conducted tests on various strategies for combining individual license plates across a given video sequence);
frame-by-frame recognizing one or more characters in each of the one or more license plates in each of the continuous frames (§ 3.1 Detection and OCR Methods for LPR – various methods for Optical Character Recognition (OCR)), calculating one or more confidence levels of the one or more characters recognized from every one of the one or more license plates, and obtaining a confidence score from the one or more confidence levels of all of the one or more characters recognized from every one of the one or more license plates (§ 3.2. Video Frame Combination Strategies - For each car sequence, we obtain a collection of identified license plates along with their respective confidences as returned by each of the pipelines described in the previous section. These identified plates may have varying levels of confidence, reflecting the OCR system’s estimation of the accuracy or reliability of the recognized characters on each plate & Table 1);
calculating the confidence score of each of the one or more license plates multiple times in the continuous frames within a period of time, and obtaining a recognition result of a license plate of the one or more license plates having a highest confidence score in one of the continuous frames (§ 3.2. Video Frame Combination Strategies & Table 1 – calculated confidence scores for a car sequence (e.g., video frames 1-10), Maximum confidence (max-conf) - The plate with the highest confidence is selected as the final result.); and
storing an image that corresponds to the license plate having the highest confidence score (§ 3.2. Video Frame Combination Strategies & Table 1 – Maximum confidence (max-conf) - The plate with the highest confidence is selected as the final result. & § 3.3 Proposed Character-Based Combination Strategy – obtaining the final plate). Not explicitly taught is storing a full frame image that corresponds to the image of the license plate having the highest confidence score.
Kavner, in the same field of endeavor, teaches a method for reading license plates, wherein a sub-image containing the license plate number is stored along with the full image (¶0081). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the features of Kavner to the Quiala invention and allow the full frame image that corresponds to the image of the license plate having the highest confidence score to be stored. The modification of Quiala in this manner would allow portions of the image other than the license plate to be processed at a later time.
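For illustration only, the “max-conf” combination strategy cited above from Quiala § 3.2 can be sketched in Python. The per-frame OCR output below is hypothetical, and the use of the mean per-character confidence as the plate's confidence score is one plausible statistical aggregation, not code or data from either reference:

```python
def plate_score(char_confidences):
    """Confidence score of a plate: here, the mean of the per-character
    confidence levels (one plausible statistical aggregation)."""
    return sum(char_confidences) / len(char_confidences)

def best_plate(frames):
    """Return (frame_index, plate_string, score) for the plate with the
    highest confidence score across the frame sequence (max-conf)."""
    scored = [
        (idx, plate, plate_score(confs))
        for idx, (plate, confs) in enumerate(frames, start=1)
    ]
    return max(scored, key=lambda t: t[2])

# Hypothetical per-frame recognition results for one car sequence.
frames = [
    ("B123456", [0.91, 0.88, 0.95, 0.90, 0.87, 0.92, 0.89]),
    ("B12345",  [0.85, 0.80, 0.82, 0.79, 0.81, 0.78]),  # one character missed
    ("B123456", [0.97, 0.96, 0.98, 0.95, 0.97, 0.96, 0.99]),
]

idx, plate, score = best_plate(frames)
print(idx, plate, round(score, 3))  # frame 3 yields the highest mean confidence
```

Under this sketch, the frame whose plate wins the max-conf selection is the one whose image (and, per the Kavner combination, full frame image) would be stored.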
Concerning claim 2, Quiala further teaches the method of claim 1, wherein recognizing the one or more characters in each of the one or more license plates comprises recognizing the one or more characters in each of the one or more license plates by incorporating an intelligent model (§ 3.1 Detection and OCR Methods for LPR – deep-learning based detectors), and calculating the one or more confidence levels of the one or more characters recognized from every one of the one or more license plates comprises calculating the one or more confidence levels using an intelligent algorithm (§ 3.1 Detection and OCR Methods for LPR & §3.3 Proposed Character-Based Combination Strategy – Algorithm to calculate plate).
Concerning claim 3, Quiala further teaches the method of claim 2, wherein the confidence score of each of the one or more license plates is a statistical value that is calculated according to the one or more confidence levels of the one or more characters of a corresponding one of the one or more license plates (§ 3.2. Video Frame Combination Strategies & Table 1 - Confidence values).
Concerning claim 4, Quiala further teaches the method of claim 3, wherein, when the confidence score of each of the one or more license plates is calculated, a quantity of the one or more recognized characters of the corresponding one of the one or more license plates is required to exceed a character-quantity threshold; wherein, the automatic license plate recognition fails when the quantity of the one or more recognized characters of the corresponding one of the one or more license plates is lower than the character-quantity threshold (§ 3.2 Video Frame Combination Strategies (last paragraph) – Cuban plates contain a valid plate pattern that consists of 7 characters (1 letter followed by 6 digits). Frames 2, 6, 8 and 10 of Table 1 are considered “valid”; § 3.3 Proposed Character-Based Combination Strategy – penalization of plates based on an expected character count of 7).
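For illustration, the character-quantity check mapped above can be sketched against the cited Cuban plate pattern of 7 characters (1 letter followed by 6 digits). The helper below is a hypothetical sketch added for clarity, not code from the reference:

```python
import re

# Cited Cuban plate pattern: 1 letter followed by 6 digits (7 characters).
PLATE_PATTERN = re.compile(r"^[A-Z]\d{6}$")
EXPECTED_CHARS = 7

def is_valid_recognition(plate):
    """A recognition passes only when the quantity of recognized characters
    meets the expected count and the string matches the plate pattern."""
    return len(plate) >= EXPECTED_CHARS and bool(PLATE_PATTERN.fullmatch(plate))

print(is_valid_recognition("B123456"))  # True: full 7-character pattern
print(is_valid_recognition("B12345"))   # False: only 6 characters recognized
```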
Concerning claim 5, Quiala further teaches the method of claim 4, wherein multiple confidence scores of the corresponding one of the one or more license plates are calculated at intervals in the continuous frames, and a frame of the continuous frames having the highest confidence score is obtained before the corresponding one of the one or more license plates disappears (§ 3.2 Video Frame Combination Strategies & Table 1 – max-conf: The plate with the highest confidence is selected as the final result. & § 3.3 Proposed Character-Based Combination Strategy – obtaining the final plate).
Concerning claim 9, Quiala further teaches the method of claim 1, wherein, after the one or more characters of each of the one or more license plates in the continuous frames are recognized, a license plate tracking process is performed for:
frame-by-frame recognizing one of the one or more license plates, and selecting a target license plate in a first frame of the continuous frames (§ 3.3 Proposed Character-Based Combination Strategy – “inferred-plate”);
calculating a distance between the target license plate in the first frame and each of one or more license plates recognized in a second frame of the continuous frames individually, so as to obtain one or more license-plate distances with respect to the one or more license plates recognized in the second frame (§ 3.3 Proposed Character-Based Combination Strategy - …Step 2 computes the total weighted edit distance between each plate, enabling the identification of the plate that is most similar to others in the collection in Step 3);
recognizing strings of the target license plate in the first frame and the one or more license plates recognized in the second frame (§ 3.3 Proposed Character-Based Combination Strategy - The alignment algorithm employed in Step 4 is based on the Ratcliff/Obershelp algorithm [18], which calculates the similarity ratio between two sequences. This algorithm conducts a line-by-line comparison of the input sequences, identifying the longest contiguous matching subsequence. By employing this algorithm, we can determine the subsequences that require addition, deletion, or replacement in order to align two strings.);
individually calculating a string similarity between the string that is recognized from the target license plate in the first frame and the string that is recognized from each of the one or more license plates recognized in the second frame (§ 3.3 Proposed Character-Based Combination Strategy -The alignment algorithm employed in Step 4 is based on the Ratcliff/Obershelp algorithm [18], which calculates the similarity ratio between two sequences. This algorithm conducts a line-by-line comparison of the input sequences, identifying the longest contiguous matching subsequence.);
calculating an overall score according to the license-plate distance and the string similarity between the target license plate in the first frame and each of the one or more license plates recognized in the second frame (§ 3.3 Proposed Character-Based Combination Strategy -Then, Step 2 computes the total weighted edit distance between each plate, enabling the identification of the plate that is most similar to others in the collection in Step 3. This similarity assessment takes into account the penalization of plates based on the difference between their number of characters and the expected count of 7, favoring plates with complete character detection…The alignment algorithm employed in Step 4 is based on the Ratcliff/Obershelp algorithm [18], which calculates the similarity ratio between two sequences. This algorithm conducts a line-by-line comparison of the input sequences, identifying the longest contiguous matching subsequence. By employing this algorithm, we can determine the subsequences that require addition, deletion, or replacement in order to align two strings); and
determining whether or not the target license plate in the first frame is any of the one or more license plates recognized in the second frame according to the overall score (§ 3.3 Proposed Character-Based Combination Strategy – similarity assessment of all the plates detected in the video sequence);
wherein the target license plate appearing in the first frame and one of the one or more license plates recognized in the second frame that is determined as the target license plate according to the overall score are assigned with a same identifier, so as to perform the license plate tracking process in the continuous frames (§ 3.3 Proposed Character-Based Combination Strategy – step 5 of Algorithm 1; §4.1 Cuban License Plate Dataset – tracking the car detections).
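For illustration of the similarity step cited from Quiala § 3.3: Python's difflib.SequenceMatcher implements the Ratcliff/Obershelp algorithm named in the reference. The plate strings, the example distances, and the equal weighting of the distance and similarity terms below are hypothetical, added only to show how a license-plate distance and a string similarity could be folded into one overall score:

```python
from difflib import SequenceMatcher

def string_similarity(a, b):
    """Ratcliff/Obershelp similarity ratio in [0, 1], per difflib."""
    return SequenceMatcher(None, a, b).ratio()

def overall_score(plate_distance, similarity, w_dist=0.5, w_sim=0.5, max_dist=7):
    """Hypothetical combined score: normalize the distance into [0, 1]
    (smaller distance is better) and blend it with the similarity."""
    return w_dist * (1.0 - min(plate_distance, max_dist) / max_dist) + w_sim * similarity

target = "B123456"                                # target plate in the first frame
candidates = ["B123456", "B723456", "C999999"]    # plates found in the second frame
distances = [0, 1, 6]                             # hypothetical edit distances

scores = [overall_score(d, string_similarity(target, c))
          for d, c in zip(distances, candidates)]
best = candidates[scores.index(max(scores))]
print(best)  # the identical plate scores highest and keeps the same identifier
```

In a tracking process like the one mapped above, the second-frame plate with the highest overall score would be assigned the target plate's identifier.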
Concerning claim 10, Quiala further teaches the method of claim 9, wherein, when license plates having the same identifier in the continuous frames have been tracked for a period of time, the full frame image having the highest confidence score is stored, or a plurality of full frame images having confidence scores that meet a threshold are stored (§ 3.2 Video Frame Combination Strategies & Table 1 – out of the 10 available video frames, frames 2, 6, 8 and 10 are considered for further processing).
Concerning claim 11, Quiala teaches an edge-computing system for automatic license plate recognition, comprising:
a photographing module (§ 4.1 Cuban License Plate Dataset – cameras at key locations to capture videos of vehicles entering and exiting an area);
a memory (§§3 Proposal & §4 Experiments – a memory is inherently necessary to store at least the images captured by the cameras and the deep-learning models used to process the video sequences); and
a processor electrically connected with the photographing module and the memory, wherein the processor performs an edge-computing method for automatic license plate recognition (§§3 Proposal & §4 Experiments – a processor is inherently necessary to process at least the images captured by the cameras and execute the deep-learning models used to process the video sequences), and the edge-computing method comprises:
using the photographing module to generate a streaming video having continuous frames (Abstract – License Plate Recognition (LPR) in videos (sequence of frames); § 3.1 Detection and OCR Methods for LPR – combination of video frames);
frame-by-frame determining one or more license plates, and extracting an image of each of the one or more license plates in each of the continuous frames from full frame images of the continuous frames; wherein the image is temporally stored in the memory (§ 3.2. Video Frame Combination Strategies - For each car sequence, we obtain a collection of identified license plates… We conducted tests on various strategies for combining individual license plates across a given video sequence);
frame-by-frame recognizing one or more characters in each of the one or more license plates in each of the continuous frames (§ 3.1 Detection and OCR Methods for LPR – various methods for Optical Character Recognition (OCR)), calculating one or more confidence levels of the one or more characters recognized from every one of the one or more license plates, and obtaining a confidence score from the one or more confidence levels of all of the one or more characters recognized from every one of the one or more license plates (§ 3.2. Video Frame Combination Strategies - For each car sequence, we obtain a collection of identified license plates along with their respective confidences as returned by each of the pipelines described in the previous section. These identified plates may have varying levels of confidence, reflecting the OCR system’s estimation of the accuracy or reliability of the recognized characters on each plate & Table 1);
calculating the confidence score of each of the one or more license plates multiple times in the continuous frames within a period of time, and obtaining a recognition result of a license plate of the one or more license plates having a highest confidence score in one of the continuous frames (§ 3.2. Video Frame Combination Strategies & Table 1 – calculated confidence scores for a car sequence (e.g., video frames 1-10), Maximum confidence (max-conf) - The plate with the highest confidence is selected as the final result.); and
storing an image that corresponds to the license plate having the highest confidence score in the memory (§ 3.2. Video Frame Combination Strategies & Table 1 – Maximum confidence (max-conf) - The plate with the highest confidence is selected as the final result. & § 3.3 Proposed Character-Based Combination Strategy – obtaining the final plate).
Not explicitly taught is storing a full frame image that corresponds to the image of the license plate having the highest confidence score.
Kavner, in the same field of endeavor, teaches a system for reading license plates, wherein a sub-image containing the license plate number is stored along with the full image (¶0081). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the features of Kavner to the Quiala invention and allow the full frame image that corresponds to the image of the license plate having the highest confidence score to be stored. The modification of Quiala in this manner would allow portions of the image other than the license plate to be processed at a later time.
Concerning claim 15, Quiala further teaches the system of claim 11, wherein an intelligent model is incorporated to recognize the one or more characters in each of the one or more license plates (§ 3.1 Detection and OCR Methods for LPR – deep-learning based detectors), and an intelligent algorithm is used to calculate a confidence level of each of the one or more characters in each of the one or more license plates (§ 3.1 Detection and OCR Methods for LPR & §3.3 Proposed Character-Based Combination Strategy – Algorithm to calculate plate); wherein the confidence score of each of the one or more license plates is a statistical value obtained by calculating the one or more confidence levels of the one or more characters in a corresponding one of the one or more license plates (§ 3.2. Video Frame Combination Strategies & Table 1 - Confidence values).
Concerning claim 16, Quiala further teaches the system of claim 15, wherein, when the confidence score of each of the one or more license plates is calculated, a quantity of the one or more recognized characters of the corresponding one of the one or more license plates is required to exceed a character-quantity threshold; wherein, the automatic license plate recognition fails when the quantity of the one or more recognized characters of the corresponding one of the one or more license plates is lower than the character-quantity threshold (§ 3.2 Video Frame Combination Strategies (last paragraph) – Cuban plates contain a valid plate pattern that consists of 7 characters (1 letter followed by 6 digits). Frames 2, 6, 8 and 10 of Table 1 are considered “valid”; § 3.3 Proposed Character-Based Combination Strategy – penalization of plates based on an expected character count of 7).
Concerning claim 17, Quiala further teaches the system of claim 16, wherein multiple confidence scores of the corresponding one of the one or more license plates are calculated at intervals in the continuous frames, and a frame of the continuous frames having the highest confidence score is obtained before the corresponding one of the one or more license plates disappears (§ 3.2 Video Frame Combination Strategies & Table 1 – max-conf: The plate with the highest confidence is selected as the final result. & § 3.3 Proposed Character-Based Combination Strategy – obtaining the final plate).
Claims 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Quiala et al. (“Enhancing License Plate Recognition in Videos Through Character-Wise Temporal Combination”, September 27-29, 2023) in view of Kavner (US 20020140577 A1) and Blais-Morin et al. (US 20240143601 A1).
Concerning claim 6, Quiala in view of Kavner teaches the method according to claim 5. Quiala further teaches the method, wherein, when the frame having the highest confidence score of the corresponding one of the one or more license plates is obtained, vehicle information is obtained from the frame by an image processing technology (§ 3.2. Video Frame Combination Strategies & Table 1 – Maximum confidence (max-conf) - The plate with the highest confidence is selected as the final result. & § 3.3 Proposed Character-Based Combination Strategy – obtaining the final plate). Not explicitly taught is the vehicle information being written into metadata of the frame.
In the same field of endeavor, Blais-Morin et al. (hereinafter Blais-Morin) teaches a method for record identification, wherein metadata comprising at least one of a vehicle license plate number, a license plate state, one or more vehicle characteristics, a time at which the image was captured, a location where the image was captured, and a device the image originated from (e.g., an identifier of a camera having captured the image) are added to images (¶0034). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add vehicle information to the frame having the highest confidence score in order to provide information of the vehicle associated with the detected license plate.
Concerning claim 18, Quiala in view of Kavner teaches the system according to claim 17. Quiala further teaches the system, wherein, when the frame having the highest confidence score of the corresponding one of the one or more license plates is obtained, vehicle information is obtained from the frame by an image processing technology (§ 3.2. Video Frame Combination Strategies & Table 1 – Maximum confidence (max-conf) - The plate with the highest confidence is selected as the final result. & § 3.3 Proposed Character-Based Combination Strategy – obtaining the final plate). Not explicitly taught is the vehicle information being written into metadata of the frame.
In the same field of endeavor, Blais-Morin et al. (hereinafter Blais-Morin) teaches a method for record identification, wherein metadata comprising at least one of a vehicle license plate number, a license plate state, one or more vehicle characteristics, a time at which the image was captured, a location where the image was captured, and a device the image originated from (e.g., an identifier of a camera having captured the image) are added to images (¶0034). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add vehicle information to the frame having the highest confidence score in order to provide information of the vehicle associated with the detected license plate.
Claims 7 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Quiala et al. (“Enhancing License Plate Recognition in Videos Through Character-Wise Temporal Combination”, September 27-29, 2023) in view of Kavner (US 20020140577 A1) and Hao et al. (CN112115904A see machine translation).
Concerning claim 7, Quiala in view of Kavner teaches the method according to claim 1. Quiala further teaches the method, comprising operating, by the edge device, an object-detection model for obtaining a boundary frame of the one or more license plates in each of the continuous frames, calculating a probability that the boundary frame is one of the one or more license plates (§4.1 Cuban License Plate Dataset & fig. 2 – boundary frames around the detected license plates; Table 1 – confidence scores). Not explicitly taught is calculating a probability that one of the one or more license plates belongs to a vehicle type, so as to recognize the one or more characters in the one of the one or more license plates.
In the same field of endeavor, Hao et al. (hereinafter Hao) teaches license plate recognition that calculates a probability that one of the one or more license plates belongs to a vehicle type, so as to recognize the one or more characters in the one of the one or more license plates (¶¶0051-0053 of the machine translation). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Quiala, Kavner and Hao in order to determine what type of license plate has been detected and if it is a domestic license plate or an overseas license plate (Hao, ¶0053).
Concerning claim 12, Quiala in view of Kavner teaches the system according to claim 11. Not explicitly taught is the system, wherein the processor operates an intelligent algorithm to train data, so as to obtain an object-detection model and a classification model.
In the same field of endeavor, Hao et al. (hereinafter Hao) teaches license plate recognition that operates an intelligent algorithm to train data, so as to obtain an object-detection model and a classification model (¶0115: deep learning is used to train). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Quiala, Kavner and Hao in order to configure and tune the parameters of each convolutional neural network of CNN to obtain the target detection network model (Hao, ¶0115).
Concerning claim 13, Quiala in view of Kavner and Hao teaches the system according to claim 12. Quiala further teaches the method, comprising operating, by the edge device, an object-detection model for obtaining a boundary frame of the one or more license plates in each of the continuous frames, calculating a probability that the boundary frame is one of the one or more license plates (§4.1 Cuban License Plate Dataset & fig. 2 – boundary frames around the detected license plates; Table 1 – confidence scores). Not explicitly taught by Quiala is calculating a probability that one of the one or more license plates belongs to a vehicle type, so as to recognize the one or more characters in the one of the one or more license plates.
In the same field of endeavor, Hao et al. (hereinafter Hao) teaches license plate recognition that calculates a probability that one of the one or more license plates belongs to a vehicle type, so as to recognize the one or more characters in the one of the one or more license plates (¶¶0051-0053 of the machine translation). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Quiala, Kavner and Hao in order to determine what type of license plate has been detected and if it is a domestic license plate or an overseas license plate (Hao, ¶0053).
Claims 8 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Quiala et al. (“Enhancing License Plate Recognition in Videos Through Character-Wise Temporal Combination”, September 27-29, 2023) in view of Kavner (US 20020140577 A1), Hao et al. (CN112115904A see machine translation) and Normington et al. (US 20220309809 A1).
Concerning claim 8, Quiala in view of Kavner teaches the method according to claim 1. Not explicitly taught is the method, comprising operating, by the edge device, a classification model for calculating a probability that one of the one or more license plates is under a vehicle jurisdiction.
In the same field of endeavor, Hao et al. (hereinafter Hao) teaches license plate recognition that operates, by the edge device, a classification model for calculating a probability that one of the one or more license plates is under a vehicle jurisdiction (¶0109, ¶0119). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Quiala, Kavner and Hao in order to determine what type of license plate has been detected and if it is a domestic license plate or an overseas license plate (Hao, ¶0053). Not explicitly taught by Quiala, Kavner and Hao is calculating a probability that a vehicle is one of a plurality of colors, and calculating a probability that the vehicle is one of a plurality of brands and models, such as to identify features of the vehicle.
In the same field of endeavor, Normington et al. (hereinafter Normington) teaches automatic number plate recognition that calculates a probability that a vehicle is one of a plurality of colors, and calculates a probability that the vehicle is one of a plurality of brands and models, such as to identify features of the vehicle (¶0028). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Quiala, Kavner, Hao and Normington in order to determine the probability that the incoming image data (e.g., a subsequent read) corresponds to the particular profile (Normington, ¶0028).
Concerning claim 14, Quiala in view of Kavner and Hao teaches the system according to claim 12. Hao further teaches license plate recognition that operates, by the edge device, a classification model for calculating a probability that one of the one or more license plates is under a vehicle jurisdiction (¶0109, ¶0119). Not explicitly taught is calculating a probability that a vehicle is one of a plurality of colors, and calculating a probability that the vehicle is one of a plurality of brands and models, such as to identify features of the vehicle.
In the same field of endeavor, Normington et al. (hereinafter Normington) teaches automatic license plate recognition (ALPR) that calculates a probability that a vehicle is one of a plurality of colors, and calculates a probability that the vehicle is one of a plurality of brands and models, such as to identify features of the vehicle (¶0028). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Quiala, Kavner, Hao and Normington in order to determine the probability that the incoming image data (e.g., a subsequent read) corresponds to the particular profile (Normington, ¶0028).
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Quiala et al. (“Enhancing License Plate Recognition in Videos Through Character-Wise Temporal Combination”, September 27-29, 2023) in view of Kavner (US 20020140577 A1) and Normington et al. (US 20220309809 A1).
Concerning claim 19, Quiala in view of Kavner teaches the system according to claim 17. Not explicitly taught is the system, wherein, when the frame having the highest confidence score in the continuous frames is obtained, the frame and a recognized string of the corresponding one of the one or more license plates are transmitted to an external system; and wherein the external system is a computer device or a cloud system.
In the same field of endeavor, Normington et al. (hereinafter Normington) teaches automatic license plate recognition (ALPR) that transmits license plate information from an ALPR system to external systems (e.g., Vehicle Profile System, Vehicle Recognition System, ALPR Backend System) for processing (fig. 2 & ¶0041). The external systems may be implemented as a cloud system (¶0080). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Quiala, Kavner, and Normington in order to process the license plate information on a system with higher computational power if needed.
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Quiala et al. (“Enhancing License Plate Recognition in Videos Through Character-Wise Temporal Combination”, September 27-29, 2023) in view of Kavner (US 20020140577 A1) and Campbell (US 12175578 B1).
Concerning claim 21, Quiala in view of Kavner teaches the method according to claim 1. Not explicitly taught is the method, wherein frame-by-frame recognizing one or more characters in each of the one or more license plates in each of the continuous frames comprises frame-by-frame recognizing a set of characters in each of the one or more license plates in each of the continuous frames, and calculating one or more confidence levels of the one or more characters recognized from every one of the one or more license plates comprises calculating a confidence level for each character of the set of characters recognized from every one of the one or more license plates.
Campbell, in the same field of endeavor, teaches calculating a confidence level for each character of the set of characters recognized from every one of the one or more license plates (col. 56, l. 51 – col. 57, l. 5). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the features of Campbell in order to determine the likelihood that each character is accurate (Campbell, col. 56, ll. 51-57).
Response to Arguments
Applicant’s arguments, see pages 7-10 of the remarks, filed 01/21/2026, with respect to the rejections of claims 1-20 under 35 U.S.C. §§ 102 and 103 have been fully considered, but they are moot in view of new grounds of rejection.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES M ANDERSON II whose telephone number is (571)270-1444. The examiner can normally be reached Monday - Friday 10AM-6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BRIAN PENDLETON can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/James M Anderson II/Primary Examiner, Art Unit 2425