Prosecution Insights
Last updated: April 19, 2026
Application No. 18/027,809

VEHICLE IMAGE ANALYSIS
Final Rejection §103

Filed: Mar 22, 2023
Examiner: MILLER, RONDE LEE
Art Unit: 2663
Tech Center: 2600 (Communications)
Assignee: UVeye Ltd.
OA Round: 2 (Final)

Predicted outcome:
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (16 granted / 22 resolved; +10.7% vs TC avg), above average
Interview Lift: +37.5% across resolved cases with an interview vs. without (strong)
Typical Timeline: 2y 11m avg prosecution; 26 applications currently pending
Career History: 48 total applications across all art units
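The headline allowance rate above follows directly from the career counts; a quick check (the rounding convention and the assumption that "+10.7% vs TC avg" is a percentage-point difference are mine, not stated by the tool):

```python
# Career allowance rate from the counts shown above.
granted, resolved = 16, 22
allow_rate = granted / resolved
print(f"{allow_rate:.0%}")  # → 73%

# Implied Tech Center average, assuming the delta is a
# percentage-point difference from the examiner's own rate.
tc_avg = allow_rate * 100 - 10.7
print(f"{tc_avg:.1f}%")
```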

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 22 resolved cases.
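A consistency check on the figures above: assuming each "vs TC avg" value is a percentage-point delta from the examiner's own rate (my assumption), all four statutes back out to the same Tech Center average estimate:

```python
# Back out the Tech Center average implied by each statute's delta.
examiner_rate = {"§101": 11.2, "§103": 46.5, "§102": 20.8, "§112": 19.5}
delta_vs_tc   = {"§101": -28.8, "§103": 6.5, "§102": -19.2, "§112": -20.5}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # every statute implies a TC average of 40.0%
```

That every delta resolves to a single 40% baseline suggests the chart's Tech Center estimate sits at 40% for each statute.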

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

The amendment filed 10/01/2025 overcomes the §101 rejection set forth in the non-final Office action mailed 06/04/2025. The amendment filed 10/01/2025 also overcomes the §112(b) rejection pertaining to claim 10 of the same non-final action. The IDSs filed 06/10/2025 and 11/11/2025 have been received and considered.

Claims 1, 3, 5 – 10, and 15 – 21 have been amended. Claims 22 – 24 have been newly added. Claims 1 – 24, all of the claims pending in the application, have been rejected.

Response to Applicant's Remarks

In view of the Applicant's arguments filed 10/01/2025 regarding amendments to independent claims 1, 8, and 18 – 21, the previously applied prior art rejections are withdrawn. Applicant's arguments are rendered moot in view of the new grounds of rejection set forth below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1 – 5, 7 – 15, and 17 – 24 are rejected under 35 U.S.C. 103 as being unpatentable over US Publication No. 2015/0103341 A1 to Buehler et al. (hereinafter Buehler) in view of US Patent No. 10706321 B1 to Chen et al. (hereinafter Chen) in further view of US Publication No. 2007/0273471 A1 to Shilling et al. (hereinafter Shilling).

Claim 1

Regarding claim 1, Buehler teaches a method for determining a fingerprint of a vehicle, comprising, by a processor and memory circuitry (PMC) (Paragraph [0016]):

a) receiving a vehicle pre-defined identifier ("Identifying object labels 436 may provide an anthropogenic identifier analogous to the object appearance and may include license plates, bumper stickers, tail markings, etc. The processor 16 may include, for example, algorithms for optical character recognition to further discriminate object labels.", Paragraph [0039]);

b) capturing by at least one sensor at least one vehicle appearance, said at least one sensor is selected from a group consisting of at least an RF sensor, an imaging device and an audio sensor ("The imaging device 14 may include one or more cameras capable of capturing and recording image data at specific wavelengths across the electromagnetic spectrum. Imaging devices for use in the present invention may capture imagery in wavelengths defined by the infrared, visible and ultraviolet bands. Additionally, the imaging device 14 may be configured to collect imagery such that the operable bands of the camera are combined to form a panchromatic image or divided to form a multispectral or hyperspectral datacube. In the case of a panchromatic imager, the imaging device 14 may form a series of panchromatic images where each image is a record of the total intensity of radiation falling onto each pixel of the image. The relationship between the pixels and their relative intensities form the spatial content (or spatial imagery) of the collected series of images.", Paragraph [0019]);

c) receiving, from the at least one sensor, at least one vehicle appearance each including at least one image data indicative of at least partial vehicle scan; said appearance is associated with a unique appearance time tag (Fig. 4; #'s 300, 302, and 304; "To illustrate, FIG. 4 demonstrates the spatial portioning of an imaged vehicle for three different orientations 300, 302, 304. For a first imaged side of the vehicle at orientation 300, the processor 16 identifies four spatial portions 310, 312, 314, 316. For a second imaged side of the vehicle at orientation 302, the processor 16 identifies four spatial portions 318, 320, 322, 324. For a third imaged side of the vehicle at orientation 304, the processor 16 identifies four spatial portions 326, 328, 330, 332. The processor 16 then assigns a spectral signature based on the hyperspectral imagery to each of the spatial portions. In this example, there will be four distinct spectral signatures for each of the three imaged orientations for a total of 12 distinct spectral signatures.", Paragraph [0031]; "The processor 16 may also infer an object's history 440 by correlating multiple observations of an object across time. The time scale that the processor 16 may have to correlate across, that is the duration of temporal discontinuity, may range from a few seconds such as when an observed object is temporarily obscured to a time scale on the order of days when an object such as a vehicle that infrequently routes through a viewing footprint of the remote imaging device. Therefore, the object's history may establish patterns in location, behavior over time and changes in physical appearance", Paragraph [0040]), where each orientation observed of the vehicle is equivalent to also being a partial scan; and

[Image: media_image1.png (greyscale)]

d) segmenting said image data into segment data that includes segments each being informative of a respective at least one-sub-component of said vehicle (Figure 4; "The software includes instructions to: observe key characteristics of the object in each of the series of images wherein some of the key characteristics are in the spectral images and some of the key characteristics are in the spatial images; associate the observed key characteristics with the object; and assign a unique identifier to the object based upon the associated key characteristics.", Paragraph [0003]), wherein the characteristics associated with each segment in Figure 4 can be considered sub-components of the vehicle.

Buehler does not teach e) determining a plurality of marker instances from said image scan or segment data, wherein each marker instance is associated with a marker class and at least one marker feature; f) storing in the storage data indicative of the vehicle's fingerprint, for subsequent authentication of the same vehicle in a later independent scan, including said vehicle pre-defined identifier and at least its corresponding (i) vehicle appearance and associated appearance time tag, (ii) the so determined marker instances; and controlling access to a facility, based on verification of the fingerprint of said vehicle.
However, Chen teaches determining a plurality of marker instances from said image scan or segment data, wherein each marker instance is associated with a marker class and at least one marker feature (Figure 15; "While this description particularly indicates that the block 312 uses CNNs 134 to detect damage to the target vehicle, it should be noted that the block 312 could use other types of statistical processing techniques to detect or classify damage or changes to particular target object components", Column 27, lines 34 - 38; "Moreover, the CNNs 134 or other deep learning tools may provide other possible outputs including, for example, a probability of a patch having damage (e.g., a number between 0 and 1), an indication of one or more types of damage detected (e.g., creases, dents, missing parts, cracks, scuffs, scrapes, scratches, etc.), an indication of damage severity (e.g., different damage levels based on, for example, the amount of labor hours required to repair or replace the component), an indication of a probability of hidden damage (e.g., damage not visible in the target images), an indication of the age of damage (e.g., whether the damage is prior damage or not), an indication of a repair cost for each involved body panel, an indication of a final repair cost, a confidence level with respect to prediction accuracy, etc. Still further, the CNNs or other deep learning technique may use, as inputs, a full set of target vehicle photos (not local patches), telematics data, video data, geographic variation data, etc.", Column 27, lines 50 - 67), wherein the marker class in this case would be the vehicle damage sustained, and the feature correlates to the type of damage (i.e., dents, scratches, scrapes, etc., as previously stated).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Buehler to incorporate marker instances within the segmented image, each instance having a feature, as disclosed by Chen. Buehler generally contemplates tracking changes in physical appearance of a vehicle over time at [0040]; Chen shows a particular methodology for tracking damage to a vehicle over time (Col. 1:43 - 60). It would have been obvious to select a known methodology to implement the general instructions of Buehler.

Buehler, now modified by Chen, teaches f) storing in the storage data indicative of the vehicle's fingerprint, for subsequent authentication of the same vehicle in a later independent scan, including said vehicle pre-defined identifier and at least its corresponding (i) vehicle appearance and associated appearance time tag, (ii) the so determined marker instances (rejected as applied to the limitations directly above, in conjunction with Paragraphs [0017] and [0042 - 0043]), where paragraph [0017] explains how the historical observations can be compared with the observations of the object at a later time for verification of the object's identity.

Neither Buehler nor Chen, nor their combination, teaches controlling access to a facility, based on verification of the fingerprint of said vehicle. However, Shilling teaches controlling access to a facility, based on verification of the fingerprint of said vehicle ("One embodiment of the present invention is a system for authorizing a waste management vehicle to proceed beyond an access point of a waste receivable environment.
The system includes an identification reader configured to obtain vehicle identification information, hauler identification information, and personnel identification information from one or more identification mechanisms…The computer system is also configured to determine whether the vehicle and the personnel are authorized to proceed beyond the access point using the identification information and biometric information, and transmit a signal to a control mechanism to allow the vehicle and the personnel to proceed beyond the access point.", Paragraph [0008]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of Buehler, in view of Chen, to incorporate using a stored vehicle profile as a validation method for pre-authorized vehicles at a vehicle entry control point (ECP), as disclosed by Shilling. The suggestion/motivation for doing so would have been to ensure the security of a building, area, or facility by maintaining a "zero" presence of unauthorized vehicles.

Claim 2

Regarding claim 2, dependent on claim 1, Buehler, in view of Chen and Shilling, teach the invention as claimed in claim 1. Buehler, in view of Chen and in further view of Shilling, further teach determining at least one marker feature instance for each one of said marker instances and, for at least one of the marker classes of the marker instance, determining at least one marker feature instance that is location dependent (Rejected as applied to claim 1); and for each marker instance, storing the so determined marker feature instances including said component dependent marker features (Rejected as applied to claim 1), where the marker instances that are being stored would incorporate the marker class and corresponding marker features as shown in Figures 7, 9, and 10 (Chen).
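The fingerprint record recited in claims 1 and 2, a pre-defined identifier plus a time-tagged appearance, per-sub-component segments, and marker instances each carrying a class and features, can be sketched as a hypothetical data structure. All names and values below are illustrative only; they are not drawn from the application or the cited art:

```python
from dataclasses import dataclass, field

@dataclass
class MarkerInstance:
    marker_class: str                              # e.g. "dent", "scratch" (cf. claim 4's group)
    features: dict = field(default_factory=dict)   # incl. location-dependent features (cf. claim 2)

@dataclass
class VehicleFingerprint:
    vehicle_id: str              # the pre-defined identifier, e.g. a plate read
    appearance_time_tag: str     # unique time tag for this appearance
    segments: list               # segment data, each informative of a sub-component
    markers: list                # the determined MarkerInstance records

# Hypothetical stored record for later re-authentication of the same vehicle.
fp = VehicleFingerprint(
    vehicle_id="ABC-1234",
    appearance_time_tag="2025-10-01T08:30:00Z",
    segments=["hood", "left_door"],
    markers=[MarkerInstance("scratch", {"panel": "left_door", "length_mm": 40})],
)
print(fp.markers[0].marker_class)  # → scratch
```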
[Images: media_image2.png, media_image3.png, media_image4.png (greyscale)]

Claim 3

Regarding claim 3, dependent on claim 1, Buehler, in view of Chen and Shilling, teach the invention as claimed in claim 1. Buehler further teaches including, with respect to said vehicle, repeating said (c) to (f) for at least one more vehicle appearance having a corresponding unique time tag, and for at least one other vehicle ("With the advent of video hyperspectral sensors, one system may gather sufficient information to identify uniquely the observed objects. Multiple systems may act independently where each system may gather sufficient information to uniquely identify the observed objects. In the multiple system modality, information may then be shared among the systems to aggregate key characteristics.", Paragraph [0044]), wherein the objects in this case are different vehicles.

Claim 4

Regarding claim 4, dependent on claim 1, Buehler, in view of Chen and Shilling, teach the invention as claimed in claim 1. Buehler, although it teaches identifying characteristics of a vehicle which could include dents, scratches, etc., does not explicitly teach wherein said marker class is selected from the group that includes: scratches, dents, handwriting, color, rust mark, cross-type screws, and printed text. However, Chen teaches wherein said marker class is selected from the group that includes: scratches, dents, handwriting, color, rust mark, cross-type screws, and printed text ("In cases in which the routine 300 determines types or probable types of damage, colors, icons, or other indicia may be used to indicate the type of damage (e.g., scratches, folds, dents, bends, etc.). FIG. 15 illustrates an example set of heat maps for the target vehicle of FIG. 14 showing an analysis of each of the four corner images of the target vehicle as determined from target vehicle images.", Column 28, lines 37 - 44).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Buehler, in view of Chen and Shilling, to further incorporate a marker class group that includes scratches, dents, etc., as disclosed by Chen. The suggestion/motivation for doing so would have been to provide unique details pertaining to specific vehicle profiles that could be used to identify a vehicle with higher accuracy.

Claim 5

Regarding claim 5, dependent on claim 1, Buehler, in view of Chen and Shilling, teach the invention as claimed in claim 1. Buehler further teaches wherein at least one of said at least one sensor ("The imaging device 14 may include one or more cameras capable of capturing and recording image data at specific wavelengths across the electromagnetic spectrum. Imaging devices for use in the present invention may capture imagery in wavelengths defined by the infrared, visible and ultraviolet bands.", Paragraph [0019]).

Claim 7

Regarding claim 7, dependent on claim 1, Buehler, in view of Chen and Shilling, teach the invention as claimed in claim 1. Buehler, in view of Chen and in further view of Shilling, further teach wherein at least one of said at least one sensor is an electromagnetic sensor and further comprising obtaining at least one electromagnetic scan of the vehicle (Rejected as applied to claim 5) and determining at least one electromagnetic marker class from said scan informative of a mark concealed underneath a non-metal surface of the vehicle ("…an indication of a probability of hidden damage (e.g., damage not visible in the target images).", Chen, Column 27, lines 58 - 59).

Claim 8

Regarding claim 8, an independent method claim, Buehler, in view of Chen and Shilling, teach the invention as claimed in claim 1.
Buehler, in view of Chen and in further view of Shilling, further teach a method for verifying a fingerprint of a vehicle, comprising by a processor and associated memory storage (Rejected as applied to claim 1):

a) receiving a vehicle pre-defined identifier (Rejected as applied to claim 1);

b) capturing by at least one sensor at least one vehicle appearance, said at least one sensor is selected from a group consisting of at least an RF sensor, an imaging device and an audio sensor (Rejected as applied to claim 1);

c) receiving from the at least one sensor at least one new vehicle appearance, each including at least one image data indicative of at least a partial vehicle scan (Rejected as applied to claim 1); said appearance is associated with a unique appearance time tag (Rejected as applied to claim 1);

d) segmenting said image data into segment data that includes segments each being informative of a respective at least one-sub-component of said vehicle (Rejected as applied to claim 1);

e) determining a plurality of new marker instances from said image scan or segment data, wherein each new marker instance is associated with a marker class and at least one marker feature (Rejected as applied to claim 1);

h) controlling access of said vehicle to a facility, based on said validation (Rejected as applied to claim 1).

Buehler further teaches f) extracting from said storage previously stored at least one vehicle appearance that is associated with said vehicle pre-defined identifier and at least one of its corresponding marker instances ("The processor 16 may infer other key characteristics with additional processing including object appearance 434, identifying object labels 436, object behavior 438 and object history 440. The object appearance 434 includes the nuanced and potentially unique aspects of the surface of the object. For example, a dent on a car or an additional antenna mounted to the roof of a vehicle may provide a specific identifying feature that the processor 16 may observe and detect with spatial and spectral processing techniques.", Paragraph [0038]; "To facilitate the referencing of objects of interest for archival and retrieval, the processor 16 may assign a single unique identifier 444 to reference the object through its life cycle as observed by the remote imaging system. The unique identifier 444 may encode key characteristics associated with the object; that is, the visual, spectral and behavioral characteristics along with the historical characteristics as described above.", Paragraph [0043]); and

g) comparing at least one new marker instance of the new appearance with a corresponding marker instance of at least one previously stored appearance of the same vehicle pre-defined identifier, and validating said vehicle fingerprint if a matching criterion is met ("The processor 16 may further analyze the spatial and spectral imagery to identify uniquely the object of interest. That is, the processor 16 may analyze a spatial/spectral characterization (such as the multi-dimensional spectral reflectance profile described above) to derive and associate key characteristics of the object 30 with a goal of identifying the individual instance of the object being imaged. In this way, beyond merely recognizing the type of object, the processor 16 may fingerprint the particular object.", Paragraph [0032]).

Claim 9

Regarding claim 9, dependent on claim 8, Buehler, in view of Chen and Shilling, teach the invention as claimed in claim 8.
Buehler does not teach wherein said (e) further includes determining at least one new marker feature instance for each one of said new marker instances and for at least one of the marker classes of said new instances, determining at least one marker feature instance that is location dependent, and wherein said extracting step includes: for each marker instance, extracting determined marker features including location dependent marker features, and wherein said comparing includes comparing each new marker instance with at least one marker instance of at least one previously stored appearance of said vehicle for determining respective similarity scores, and in case at least one of said similarity scores exceeds a threshold validating said new marker instance, determining if said matching criterion is met based on at least the validated marker instances and the corresponding stored marker instances.

However, Chen teaches wherein said (e) further includes determining at least one new marker feature instance for each one of said new marker instances and for at least one of the marker classes of said new instances, determining at least one marker feature instance that is location dependent, and wherein said extracting step includes: for each marker instance, extracting determined marker features including location dependent marker features, and wherein said comparing includes comparing each new marker instance with at least one marker instance of at least one previously stored appearance of said vehicle for determining respective similarity scores, and in case at least one of said similarity scores exceeds a threshold validating said new marker instance, determining if said matching criterion is met based on at least the validated marker instances and the corresponding stored marker instances ("The server may then determine (block 1870) whether the determined quality or completeness of the portion of the set of images at least meets a given threshold criteria. Generally, the threshold criteria may account some combination of a sufficient amount and type of image perspective(s), a sufficient amount and type of identified vehicle components, a sufficient image quality, and/or other characteristics. For example, a threshold criteria for a given base image model may specify that "front right," "back right," "front left," and "back left" image perspectives are required, and that a vehicle hood and four (4) doors must be depicted in the portion of the set of images. It should be appreciated that the threshold criteria may vary depending on the base image model.", Column 40, lines 46 - 60; "After receiving the image(s) from the electronic device, the server may analyze the image(s) to identify the target vehicle depicted in the image(s) and determine whether the quality and/or characteristics of the image(s) at least meet the threshold criteria for the corresponding base image model, as discussed herein.", Column 42, lines 10 - 16).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Buehler, in view of Chen and Shilling, to further incorporate comparing previously stored features with newly taken features to determine to a threshold whether they belong to the same vehicle, as disclosed by Chen. The suggestion/motivation for doing so would have been that, by comparing the newly captured features of a vehicle to the previously stored features, the user can determine that the images belong to the same vehicle.

Claim 10

Regarding claim 10, dependent on claim 9, Buehler, in view of Chen and Shilling, teach the invention as claimed in claim 9.
Buehler further teaches further comprising for at least one validated marker instance of the newly acquired appearance, determining, utilizing a narrowing criterion, at least one candidate reference marker instance out of a larger number of stored reference marker instances of at least one vehicle appearance, and determining whether said matching criterion is met based on at least the validated marker instances and the corresponding stored candidate reference marker instances ("Because the unique identifier is based upon the intrinsic characteristics of the object that are independent of the characteristics of the imaging system, the tracking system can correlate separate observances of an object when historical information about the object life is initially unknown. In other words, the system may observe a previously unknown object, assign a unique identifier according to the method of the present invention and then observe the object at a later time and correctly associate the observations.", Paragraph [0017]).

Claim 11

Regarding claim 11, dependent on claim 9, Buehler, in view of Chen and Shilling, teach the invention as claimed in claim 9. Buehler does not teach wherein said matching criterion is met if the number of validated marker instances out of the corresponding stored marker instances exceeds a given threshold. However, Chen teaches wherein said matching criterion is met if the number of validated marker instances out of the corresponding stored marker instances exceeds a given threshold ("According to embodiments, the server may perform processing on the additional images as depicted in FIGS. 18A and B, and as discussed herein, until the server identifies a portion of the received images that have qualities and characteristics that at least meet the threshold criteria of the base image model.", Column 41, lines 35 - 40).

Claim 12

Regarding claim 12, dependent on claim 8, Buehler, in view of Chen and Shilling, teach the invention as claimed in claim 8.
Buehler does not teach wherein at least some of said features are component dependent, and wherein said comparison is segment dependent thereby reducing the false alarms and computational complexity of said comparison. However, Chen teaches wherein at least some of said features are component dependent, and wherein said comparison is segment dependent thereby reducing the false alarms and computational complexity of said comparison ("The image processing system may then, using the contours of the target object as defined in the base object model, identify the same components in the aligned target object image and may delete or remove background pixels or other extraneous information based on this comparison.", Column 2, lines 49 - 53), wherein the contours can be considered segments of an image and the removal of the "extraneous information" will "reduce" false alarms and computational complexity.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Buehler, in view of Chen and Shilling, to further incorporate features being component dependent so that the compared segments have reduced false alarms and computational complexity, as disclosed by Chen. The suggestion/motivation for doing so would have been to increase the accuracy while reducing the time needed to reach a conclusion.

Claim 13

Regarding claim 13, dependent on claim 8, Buehler, in view of Chen and Shilling, teach the invention as claimed in claim 8.
Buehler further teaches wherein in case that a matching criterion is met with respect to the validated vehicle, the new marker instances of the validated vehicle that did not meet the similarity score are stored together with their associated feature instances for improving future vehicle verification ("Subsequent to the creation of the identifier, the identifier may provide a reference to the object for adding new key characteristics or retrieving known characteristics of the related object.", Paragraph [0043]), where the new characteristics (similar or not) can be stored and retrieved later for future reference.

Claim 17

Regarding claim 17, dependent on claim 8, Buehler, in view of Chen and Shilling, teach the invention as claimed in claim 8. Buehler, in view of Chen and in further view of Shilling, further teach wherein at least one of said at least one sensor is an electromagnetic sensor and further comprising obtaining at least one electromagnetic scan of the vehicle (Rejected as applied to claim 5) and determining at least one electromagnetic marker class from said scan informative of a mark concealed underneath a non-metal surface of the vehicle (Rejected as applied to claim 7).

Buehler further teaches wherein said extracting and comparing apply also to said electromagnetic marker classes ("These hyperspectral search algorithms are typically designed to exploit statistical characteristics of candidate targets in the imagery and are typically built upon well-known statistical concepts. For example, Mahalanobis distance is a statistical measure of similarity that has been applied to hyperspectral pixel signatures. Mahalanobis distance measures a signature's similarity by testing the signature against an average and standard deviation of a known class of signatures.", Paragraph [0036]; "Other known techniques include Spectral Angle Mapping (SAM), Spectral Information Divergence (SID), Zero Mean Differential Area (ZMDA) and Bhattacharyya Distance. SAM is a method for comparing a spectral signature to a known signature by treating each spectra as vectors and calculating the angle between the vectors. Because SAM uses only the vector direction and not the vector length, the method is insensitive to variation in illumination. SID is a method for comparing a spectral signature to a known signature by measuring the probabilistic discrepancy or divergence between the spectra. ZMDA normalizes the signatures by their variance and computes their difference, which corresponds to the area between the two vectors. Bhattacharyya Distance is similar to Mahalanobis Distance but is used to measure the distance between a set of spectral signatures against a known class of signatures.", Paragraph [0036]), wherein these various techniques known in the field of computer vision can be used, as explicitly stated in Paragraph [0034].

Claims 14 – 15 are rejected for the same reasons applied to the above claims.

Claim 18, an independent system claim, is rejected for the same reasons as applied to claim 1.

Claim 19, an independent system claim, is rejected for the same reasons as applied to claim 8.

Claim 20, an independent non-transitory computer readable medium claim, is rejected for the same reasons as applied to claim 1.

Claim 21, an independent non-transitory computer readable medium claim, is rejected for the same reasons as applied to claim 8.

Claims 22 – 24 are rejected for the same reasons as applied to claim 1.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over US Publication No. 2015/0103341 A1 to Buehler et al. (hereinafter Buehler) in view of US Patent No. 10706321 B1 to Chen et al. (hereinafter Chen) in further view of US Publication No. 2007/0273471 A1 to Shilling et al. (hereinafter Shilling) and further in view of Non-Patent Literature "Vehicle detection and classification using audio-visual cues" to Piyush et al. (hereinafter Piyush).
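Stepping back to the spectral comparison techniques quoted in the claim 17 analysis above: SAM, as Buehler's [0036] describes it, reduces to the angle between two spectra treated as vectors. A minimal sketch (the signatures here are made-up illustrations, not data from any of the references):

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two spectral signatures.

    Uses only vector direction, not length, so it is insensitive to
    overall illumination scale, as the quoted passage notes.
    """
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

sig = [0.2, 0.5, 0.9]
scaled = [2 * x for x in sig]   # same material, brighter illumination
other = [0.9, 0.5, 0.2]         # a different signature
print(round(spectral_angle(sig, scaled), 6))  # → 0.0 (same direction)
print(round(spectral_angle(sig, other), 3))   # noticeably larger angle
```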
Claim 6

Regarding claim 6, dependent upon claim 1, Buehler, in view of Chen and Shilling, teach the invention as claimed in claim 1. Neither Buehler, Chen, nor Shilling, nor their combination, teaches wherein at least one of said at least one sensor is an audio sensor and further comprising obtaining at least one audio scan of the vehicle and determining at least one audio marker class from said scan informative of sound of at least one module associated with said vehicle.

However, Piyush teaches wherein at least one of said at least one sensor is an audio sensor and further comprising obtaining at least one audio scan of the vehicle and determining at least one audio marker class from said scan informative of sound of at least one module associated with said vehicle (Figure 3; Table I; "For vehicle detection, first the audio signal of the video file was separated using 'Format factory', a freeware program. The audio file originally at 44100 Hz with mono channel in wav file format was then re-sampled into 11025 Hz. Then short term energy (STE) of the audio signal is computed using a Hamming window of 20 ms size and 5 ms shift. STE is then smoothed using Bessel's low pass filter in order to remove high frequency fluctuations [6]. The smoothed STE represented in logarithmic scale is shown in Fig. 3(b). Comparison of audio signal and it's STE in Fig. 3(a) & (b) clearly indicate that sharp peaks in the STE contour corresponds to the presence of vehicle in front of the camera.", Section III - The Proposed System: Part B, 'Vehicle Detection and Video Frame Extraction').
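The short-term-energy computation quoted from Piyush (20 ms Hamming window, 5 ms shift at 11025 Hz) can be sketched as follows. The signal here is synthetic and the Bessel low-pass smoothing step is omitted, so this is illustrative only:

```python
import math

def short_term_energy(signal, fs=11025, win_ms=20, shift_ms=5):
    """Per-frame energy of a Hamming-windowed signal; sharp peaks in
    the resulting contour indicate a vehicle passing the sensor."""
    win = int(fs * win_ms / 1000)
    shift = int(fs * shift_ms / 1000)
    hamming = [0.54 - 0.46 * math.cos(2 * math.pi * n / (win - 1))
               for n in range(win)]
    energies = []
    for start in range(0, len(signal) - win + 1, shift):
        frame = signal[start:start + win]
        energies.append(sum((s * h) ** 2 for s, h in zip(frame, hamming)))
    return energies

# Synthetic one-second clip: quiet noise floor with a loud burst
# (a "vehicle") around samples 5000-5500.
sig = [0.01] * 11025
for i in range(5000, 5500):
    sig[i] = 0.8

ste = short_term_energy(sig)
peak_frame = max(range(len(ste)), key=ste.__getitem__)
print(peak_frame * int(11025 * 5 / 1000))  # sample index near the burst
```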
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Buehler, in view of Chen and Shilling, to further incorporate the use of an audio sensor and determining a vehicle audio class associated with the captured signal, as disclosed by Piyush. The suggestion/motivation for doing so would have been to further narrow the possible vehicle-type candidates being observed by the sensors, leading to a much faster and more accurate determination of the unique vehicle identifier from a database.

Claim 16, dependent upon claim 8, is rejected as applied to claim 6.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ronde Miller, whose telephone number is (703) 756-5686. The examiner can normally be reached Monday-Friday, 8:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RONDE LEE MILLER/
Examiner, Art Unit 2663

/GREGORY A MORSE/
Supervisory Patent Examiner, Art Unit 2698

Prosecution Timeline

Mar 22, 2023
Application Filed
May 30, 2025
Non-Final Rejection — §103
Oct 01, 2025
Response Filed
Jan 07, 2026
Final Rejection — §103
Apr 07, 2026
Request for Continued Examination
Apr 15, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573215
LEARNING APPARATUS, LEARNING METHOD, OBJECT DETECTION APPARATUS, OBJECT DETECTION METHOD, LEARNING SUPPORT SYSTEM AND LEARNING SUPPORT METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12548114
METHOD FOR CODE-LEVEL SUPER RESOLUTION AND METHOD FOR TRAINING SUPER RESOLUTION MODEL THEREFOR
2y 5m to grant Granted Feb 10, 2026
Patent 12524833
X-RAY DIAGNOSIS APPARATUS, MEDICAL IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM
2y 5m to grant Granted Jan 13, 2026
Patent 12502905
SECURE DOCUMENT AUTHENTICATION
2y 5m to grant Granted Dec 23, 2025
Patent 12505581
ONLINE TRAINING COMPUTER VISION TASK MODELS IN COMPRESSION DOMAIN
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+37.5%)
2y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
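As a sanity check, the displayed percentages follow from the raw counts shown above (16 granted of 22 resolved, +37.5% relative interview lift). Note that clamping the with-interview figure at 99% is my assumption about how the dashboard avoids displaying 100%:

```python
granted, resolved = 16, 22          # career record shown above
base = granted / resolved           # 0.727... -> displayed as 73%
interview_lift = 0.375              # +37.5% relative lift with interview

# Applying the relative lift and clamping just below 100% reproduces
# the displayed 99%; the exact clamping rule is an assumption.
with_interview = min(base * (1 + interview_lift), 0.99)

print(f"{base:.0%} base, {with_interview:.0%} with interview")
# 73% base, 99% with interview
```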
