Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over Moustafa (US 2021/0103747 A1), Jakobsen (US 2022/0092343 A1), and Tafazzoli (2017, IEEE Xplore).
Regarding claim 1, Moustafa teaches a method comprising:
recording a first acoustic output of a target vehicle [[abstract] a vehicle identification circuit to identify a type of vehicle based on the image event and the sound event];
accepting, as input, a vehicle type representing at least a [type] of the target vehicle [[abstract] vehicle identification circuit to identify a type of vehicle based on the image event and the sound event; [0031] sensor array interface 106 may be used to provide input or output signals to the vehicle recognition platform 102 from one or more sensors of a sensor array installed on the vehicle 104. Examples of sensors include, but are not limited to microphone arrangement 116; forward, side, or rearward facing cameras such as the image capture arrangement 115; radar; LiDAR; ultrasonic distance measurement sensors; the light sensor 117; or other sensors];
retrieving, from a database, a second acoustic output of a reference vehicle that is assigned to the vehicle type [[0088] data structure lookup is performed (e.g., by referencing the audio-image association represented by the data structure; [0097] database of the sirens and emergency vehicles … may use a continuous learning module … vehicle identification circuit…allows the image capture arrangement to verify the detection and feedback to the system for any correction];
acoustically detecting that the target vehicle has different characteristics than the reference vehicle based at least on an acoustic dissimilarity between the first acoustic output and the second acoustic output [[abstract] machine learning technique; [0008] deep learning model used for vehicle recognition; [0025] simultaneous multi-modal audio, light, and image inference techniques as discussed herein can aggregate the inference processing].
Moustafa does not explicitly teach and yet Jakobsen teaches generating an electronic alert that the target vehicle is potentially disguised as the vehicle type based on the acoustic dissimilarity [[0015] re-identification systems may be used with various types of media. Apart from image data of persons, re-identification systems may also be applied on images of vehicles or animals, or other types of media may be used altogether. For example, the media data may be one of image data, video data, audio data, a three-dimensional representation of movement of an object and text-based media data; [0047] re-identification code 140 c being dissimilar (i.e. different) from the other re-identification codes 140 a; 140 b. In other words, the re-identification codes 140 a; 140 b for the two first images 130 a; 130 b (which feature the same person from different angles) are similar whereas the codes 140 b; 140 c for the two final images 130 b; 130 c are different (since the images are of two different persons); [0063] media data may accordingly originate from various types of media data generation devices, such as cameras or camera sensors, microphones, three-dimensional scanners or text acquisition systems].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the image and sound vehicle identification taught by Moustafa with the dissimilarity metric taught by Jakobsen so that duplicate detections may be filtered when counting unique vehicles (Jakobsen) [[0047-0048]].
Moustafa does not explicitly teach and yet Tafazzoli teaches representing at least a make and model of the target vehicle [[title] Large and Diverse Dataset for Improved Vehicle Make and Model Recognition; [pg. 2, col. 1]].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the identification of vehicle type taught by Moustafa with the identification of vehicle make and model taught by Tafazzoli so that ambiguity among various vehicle makes and models may be resolved (Tafazzoli) [[abstract]].
Regarding claim 4, Moustafa also teaches the method of claim 1, further comprising passively recording the first acoustic output with one or more acoustic transducers that are located remotely from the target vehicle [[0023] sound sensors (including microphones or other sound sensors used for vehicle detection such as emergency vehicle detection), ultrasound, infrared, or other sensor systems].
Regarding claim 5, Moustafa does not explicitly teach and yet Jakobsen teaches the method of claim 1, wherein acoustically detecting that the target vehicle has different characteristics than the vehicle type includes detecting that the target vehicle is a modified version of the reference vehicle [[0015]; [0047] re-identification code 140 c being dissimilar (i.e. different) from the other re-identification codes 140 a; 140 b. In other words, the re-identification codes 140 a; 140 b for the two first images 130 a; 130 b (which feature the same person from different angles) are similar whereas the codes 140 b; 140 c for the two final images 130 b; 130 c are different (since the images are of two different persons); [0063] media data may accordingly originate from various types of media data generation devices, such as cameras or camera sensors, microphones, three-dimensional scanners or text acquisition systems].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the image and sound vehicle identification taught by Moustafa with the dissimilarity metric taught by Jakobsen so that duplicate detections may be filtered when counting unique vehicles (Jakobsen) [[0047-0048]].
Regarding claim 6, Moustafa does not explicitly teach and yet Jakobsen teaches the method of claim 1, further comprising generating a graphical user interface that includes a status of the electronic alert and a measure of the acoustic dissimilarity [[0088-0089] the evaluation device 30 may provide a visualization 34 to the end user. For example, the visualization may show a result of the re-identification being performed by the evaluation device 30; [0102] re-identification code that is dissimilar to the given re-identification code; [0112] steps, operations or processes of different ones of the methods described above may also be executed by programmed computers … graphics processor units].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the image and sound vehicle identification taught by Moustafa with the dissimilarity metric taught by Jakobsen so that duplicate detections may be filtered when counting unique vehicles (Jakobsen) [[0047-0048]].
Claims 7-8, 13, 15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Moustafa (US 2021/0103747 A1), Jakobsen (US 2022/0092343 A1), Tafazzoli (2017, IEEE Xplore), and Riethmueller (US 2013/0042262 A1).
Regarding claim 7, Moustafa does not explicitly teach and yet Riethmueller teaches the method of claim 1, further comprising: generating a target acoustic fingerprint from the first acoustic output; and wherein acoustically detecting that the target vehicle has different characteristics than the reference vehicle comprises: detecting that the target vehicle has different audio characteristics than the reference vehicle based on at least mean absolute errors between the target acoustic fingerprint and a reference acoustic fingerprint generated from the second acoustic output of the reference vehicle [[0105] the broadcast and reference media content may be of any suitable type including, for example, audio, video, combined audio and video, digital information (including metadata attached, embedded or otherwise related to other media types), etc. The reference media content can be obtained from any source able to store, record, or play media (e.g., a broadcast television source, network server source, a digital video disc source, etc.); [0107] monitoring module generates descriptors, such as digital signatures—also referred to herein as fingerprints—from the received broadcast media content. In various embodiments, the digital signatures describe specific video, audio and/or audiovisual aspects of the content, such as color distribution, shapes, and patterns in the video parts and the frequency spectrum in the audio stream. Each sample of media may be assigned a (potentially unique) fingerprint that is basically a compact digital representation of its video, audio, and/or audiovisual characteristics; [0108] monitoring module utilizes such descriptors to conduct comparisons to find identical, similar and/or different frame sequences or clips in a reference media.
In other embodiments, these comparisons may be carried out as a direct comparison of media streams, without the generation of descriptors; [0123] if the level of similarity is determined to be above a threshold level, the broadcast media sequence may be identified with the reference media sequence. The level of similarity between the descriptors may be calculated based on any suitable metric including, for example a pixel by pixel comparison of image frames, a Minkowski type metric, Mean Square Error type metric, Mean Absolute Error metric, etc. The level of similarity may be calculated using any of the comparison techniques described below and/or set forth in the international patent applications incorporated by reference above]].
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the sound and image detection of types of vehicles as taught by Moustafa, with the acoustic fingerprinting as taught by Riethmueller so that similarity between a media source and a reference media may be calculated using mean absolute error metric (Riethmueller) [[0123]].
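As an illustrative sketch only (no code appears in the cited references; the signal values, band count, and fingerprint construction below are hypothetical), the Mean Absolute Error comparison of acoustic fingerprints described in Riethmueller [0123] may be expressed as:

```python
import numpy as np

def acoustic_fingerprint(samples: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Compact spectral fingerprint: mean magnitude in n_bands frequency bands."""
    spectrum = np.abs(np.fft.rfft(samples))
    bands = np.array_split(spectrum, n_bands)
    return np.array([band.mean() for band in bands])

def mean_absolute_error(fp_a: np.ndarray, fp_b: np.ndarray) -> float:
    """Mean Absolute Error metric between two fingerprints (lower = more similar)."""
    return float(np.mean(np.abs(fp_a - fp_b)))

# Hypothetical recordings: a matching tone versus a non-matching tone.
t = np.linspace(0, 1, 8000, endpoint=False)
target = np.sin(2 * np.pi * 120 * t)          # target vehicle recording
reference_same = np.sin(2 * np.pi * 120 * t)  # reference of the same character
reference_diff = np.sin(2 * np.pi * 480 * t)  # reference of a different character

fp_t = acoustic_fingerprint(target)
mae_same = mean_absolute_error(fp_t, acoustic_fingerprint(reference_same))
mae_diff = mean_absolute_error(fp_t, acoustic_fingerprint(reference_diff))
assert mae_same < mae_diff  # similar sources score a lower error than dissimilar ones
```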
Regarding claim 8, Moustafa teaches a non-transitory computer-readable medium storing computer-executable instructions that when executed by at least a processor of a computer cause the computer to:
record a target acoustic output of a target vehicle, wherein the target vehicle includes a vehicle type representing at least a [type] [[abstract][0028][0031]];
retrieve, from a database, a reference acoustic output of a reference vehicle that is assigned to the vehicle type [[0097]];
acoustically detect whether the target vehicle matches the reference vehicle within a threshold [[0065] accuracy threshold. Similarly, if a given model is inaccurate enough to satisfy a random chance threshold (e.g., the model is only 55% accurate in determining true/false outputs for given inputs),] based at least on an acoustic similarity to the reference acoustic output [[abstract][0008][0025]].
Moustafa does not explicitly teach and yet Jakobsen teaches generate an electronic alert that the target vehicle is or is not similar to the reference vehicle based on the acoustic similarity to the reference acoustic output [[0015][0047][0063]].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the image and sound vehicle identification taught by Moustafa with the dissimilarity metric taught by Jakobsen so that duplicate detections may be filtered when counting unique vehicles (Jakobsen) [[0047-0048]].
Moustafa does not explicitly teach and yet Tafazzoli teaches representing at least a make and model of the target vehicle [[title] Large and Diverse Dataset for Improved Vehicle Make and Model Recognition; [pg. 2, col. 1]].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the identification of vehicle type taught by Moustafa with the identification of vehicle make and model taught by Tafazzoli so that ambiguity among various vehicle makes and models may be resolved (Tafazzoli) [[abstract]].
Moustafa does not explicitly teach and yet Riethmueller teaches generate a target acoustic fingerprint from the target acoustic output, and match based at least on acoustic similarity between the target acoustic fingerprint and the reference acoustic fingerprint [[0105][0107][0108][0123]].
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the sound and image detection of types of vehicles as taught by Moustafa, with the acoustic fingerprinting as taught by Riethmueller so that similarity between a media source and a reference media may be calculated using mean absolute error metric (Riethmueller) [[0123]].
Regarding claim 13, Moustafa does not explicitly teach and yet Jakobsen teaches the non-transitory computer-readable medium of claim 8, wherein the instructions further cause the computer system to generate a graphical user interface that includes a status of the electronic alert [[0088-0089][0102][0112]].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the image and sound vehicle identification taught by Moustafa with the dissimilarity metric taught by Jakobsen so that duplicate detections may be filtered when counting unique vehicles (Jakobsen) [[0047-0048]].
Regarding claim 15, Moustafa teaches a computing system, comprising:
a processor [[0027] vehicle recognition platform includes a light processor … image processor];
a memory operably connected to the processor [[0067] machine learning that includes a memory];
a non-transitory computer-readable medium operably connected to the processor and memory and storing computer-executable instructions that when executed by at least the processor cause the computing system to [[0108] machine-readable storage device may include any non-transitory mechanism for storing information in a form readable by a machine (e.g., a computer)]:
record a first acoustic output of a target vehicle, wherein the target vehicle includes a vehicle type representing at least a [type] [[abstract][0028][0031]];
retrieve, from a database, a second acoustic output of a reference vehicle associated to the vehicle type [[0097]];
acoustically detect that the target vehicle has different characteristics than the reference vehicle based at least on an acoustic [similarity] between the first acoustic output and the second acoustic output [[abstract][0008][0025][0065]].
Moustafa does not explicitly teach and yet Jakobsen teaches generate an electronic alert that the target vehicle is a modified version of the reference vehicle based on the acoustic dissimilarity [[0015][0047][0063]].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the image and sound vehicle identification taught by Moustafa with the dissimilarity metric taught by Jakobsen so that duplicate detections may be filtered when counting unique vehicles (Jakobsen) [[0047-0048]].
Moustafa does not explicitly teach and yet Tafazzoli teaches representing at least a make and model of the target vehicle [[title] Large and Diverse Dataset for Improved Vehicle Make and Model Recognition; [pg. 2, col. 1]].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the identification of vehicle type taught by Moustafa with the identification of vehicle make and model taught by Tafazzoli so that ambiguity among various vehicle makes and models may be resolved (Tafazzoli) [[abstract]].
Moustafa does not explicitly teach and yet Riethmueller teaches generate a target acoustic fingerprint from the target acoustic output, and match based at least on acoustic similarity between the target acoustic fingerprint and the reference acoustic fingerprint [[0105][0107][0108][0123]].
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the sound and image detection of types of vehicles as taught by Moustafa, with the acoustic fingerprinting as taught by Riethmueller so that similarity between a media source and a reference media may be calculated using mean absolute error metric (Riethmueller) [[0123]].
Regarding claim 18, Moustafa teaches the computing system of claim 15, wherein the instructions further cause the computing system to passively record the first acoustic output with one or more acoustic transducers that are located remotely from the target vehicle [[0023] sound sensors (including microphones or other sound sensors used for vehicle detection such as emergency vehicle detection), ultrasound, infrared, or other sensor systems].
Regarding claim 19, Moustafa teaches the computing system of claim 15, wherein the target vehicle is a watercraft or an aircraft [[0028] vehicle 104, which may also be referred to as an “ego vehicle” or “host vehicle”, may be any type of vehicle, such as a commercial vehicle, a consumer vehicle, a recreation vehicle, a car, a truck, a motorcycle, a boat, a drone, a robot, an airplane, a hovercraft, or any mobile craft able to operate at least partially in an autonomous mode].
Regarding claim 20, Moustafa teaches the computing system of claim 15, wherein the instructions further cause the computing system to generate a graphical user interface that includes a status of the electronic alert [[0041] when a police siren is detected by the vehicle identification circuit 105 using multimodal data (e.g., audio data, image data, outdoor light signals detected by corresponding sensors), an icon or other graphic representation may be presented on an in-dash display in the vehicle 104 to alert the occupant].
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Moustafa (US 2021/0103747 A1), Jakobsen (US 2022/0092343 A1), Tafazzoli (2017, IEEE Xplore), and Riethmueller (US 2013/0042262 A1) as applied to claim 8 above, and further in view of Groot (2010, Thesis).
Regarding claim 12, Moustafa does not explicitly teach and yet Groot teaches the non-transitory computer-readable medium of claim 8, wherein the instructions further cause the computer to search a library for the reference acoustic fingerprint that is associated with the vehicle type [[pg. 5] secondly, extract features which allow a classification method to discriminate between the different target classes … feature extraction of the classifier provided discriminative features which allowed a classification method to discriminate between the classes; [pg. 113, classification] E.1 Minimum Distance (MD) the MD classifier, also known as nearest neighbor, is searching for the training feature, which is at minimum distance from the input feature. The feature distance function can be defined in many ways, but is mostly defined as the root of the sum of the squared distances; [sec. 1.2 acoustic sensor networks] this project will investigate acoustic sound recorded with microphones; [sec. 1.4.4 feature extraction] feature extraction is the challenging aspect required for target classification. Feature extraction is equivalent to making an acoustic class fingerprint of the recorded signal; [sec. 2.2 vehicle running piston engine] measurements of a Renault Laguna with a 2.2 liter engine and four cylinders … similar measurements as above were done with a Honda Civic Shuttle with a four cylinder piston engine].
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the sound and image detection of types of vehicles as taught by Moustafa, with the acoustic fingerprinting similarity as taught by Groot so that an acoustic sound network may be used to match certain target classes (Groot) [[sec. 9.1]].
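As an illustrative sketch only (the library entries and fingerprint values below are hypothetical, not drawn from any reference), Groot's minimum-distance (nearest-neighbor) classifier, which searches for the training feature at minimum distance from the input feature using the root of the sum of squared distances, may be expressed as:

```python
import math

# Hypothetical library of reference acoustic fingerprints keyed by vehicle type.
library = {
    "sedan":        [0.9, 0.2, 0.1],
    "diesel_truck": [0.3, 0.8, 0.6],
    "motorcycle":   [0.1, 0.4, 0.9],
}

def euclidean(a, b):
    """Root of the sum of squared distances, as in the MD classifier."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(fingerprint):
    """Return the vehicle type whose reference fingerprint is at minimum distance."""
    return min(library, key=lambda key: euclidean(library[key], fingerprint))

# A fingerprint near the stored "sedan" entry is classified as a sedan.
assert classify([0.88, 0.22, 0.12]) == "sedan"
```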
Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Moustafa (US 2021/0103747 A1), Jakobsen (US 2022/0092343 A1), and Tafazzoli (2017, IEEE Xplore) as applied to claim 1 above, and further in view of Sciabica (2012, AES).
Regarding claim 3, Moustafa does not explicitly teach and yet Sciabica teaches the method of claim 1, wherein recording the first acoustic output further comprises: collecting engine noise that is produced by operation of an engine of the target vehicle [[title] dissimilarity test modelling by time-frequency representation applied to engine sound; [pg. 2, col. 1] we apply this method to synthesized engine sounds obtained with the so-called HARTIS synthesizer [10] developed at Peugeot-Citroen, and engine sounds recorded in different cars.], wherein the first acoustic output includes the engine noise; and storing the engine noise as one or more time series [[sec. 2.1.2] interior car sounds are recorded with a dummy head in 12 cars from different manufacturers during acceleration. We only conserved the part of the acceleration corresponding to a motor rotation speed between 3500 and 4300 rotations per minute. The sounds were scaled in time and lasted for 2 seconds. In addition, they all had the same rotation speed variation over time.].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the identification of vehicle type taught by Moustafa with the recording of vehicle engine sounds taught by Sciabica so that dissimilarity with measured sounds may be evaluated (Sciabica) [[abstract]].
Regarding claim 10, Moustafa does not explicitly teach and yet Sciabica teaches the non-transitory computer-readable medium of claim 8, wherein the instructions to record the target acoustic output further cause the computer system to: collect engine noise that is produced by operation of an engine of the target vehicle, wherein the target acoustic output includes the engine noise; and store the engine noise as one or more time series [[title][pg. 2, col. 1][sec. 2.1.2]].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the identification of vehicle type taught by Moustafa with the recording of vehicle engine sounds taught by Sciabica so that dissimilarity with measured sounds may be evaluated (Sciabica) [[abstract]].
Regarding claim 17, Moustafa does not explicitly teach and yet Sciabica teaches the computing system of claim 15, wherein the instructions to record the first acoustic output further cause the computing system to: collect engine noise that is produced by operation of an engine of the target vehicle; and store the engine noise as part of the first acoustic output [[title][pg. 2, col. 1][sec. 2.1.2]].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the identification of vehicle type taught by Moustafa with the recording of vehicle engine sounds taught by Sciabica so that dissimilarity with measured sounds may be evaluated (Sciabica) [[abstract]].
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Moustafa (US 2021/0103747 A1), Jakobsen (US 2022/0092343 A1), Tafazzoli (2017, IEEE Xplore), and Riethmueller (US 2013/0042262 A1) as applied to claim 8 above, and further in view of Zhu (2020, IEEE).
Regarding claim 11, Moustafa does not explicitly teach and yet Zhu teaches the non-transitory computer-readable medium of claim 8, wherein the instructions to generate the electronic alert further comprise instructions to cause the computer system to: generate the electronic alert to indicate that the target vehicle has different characteristics than the reference vehicle and the target vehicle is potentially disguised to have an appearance of the vehicle type configuration by being constructed using parts other than or in addition to components used to construct a vehicle to have the configuration, wherein the target vehicle is configured to deceptively appear to have the configuration in order to conceal an illicit purpose of the target vehicle [[pg. 1, col. 2] metric learning into vehicle re-ID, and adopted novel metrics for measuring the similarity between vehicles in Euclidean spaces … various disguise cases, such as the usage of fake license plates, and alteration of vehicle color or local style, cause misleading cues on visual features for vehicle re-ID, as shown in Fig.1. To handle this problem, we make full use of vehicle front window area (FW for short) to promote the discriminative effect].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the identification of vehicle type taught by Moustafa with the modified similarity comparison taught by Zhu so that disguised vehicles can be recognized (Zhu) [[abstract][pg. 1, col. 2]].
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Moustafa (US 2021/0103747 A1), Jakobsen (US 2022/0092343 A1), Tafazzoli (2017, IEEE Xplore), and Riethmueller (US 2013/0042262 A1) as applied to claim 8 above, and further in view of Yu (US 2018/0358033 A1).
Regarding claim 14, Moustafa does not explicitly teach and yet Yu teaches the non-transitory computer-readable medium of claim 8, wherein the instructions to retrieve the reference acoustic fingerprint further cause the computer system to: accept a user input indicating the vehicle type of the target vehicle; and retrieve the reference acoustic fingerprint assigned to the vehicle type in a library of acoustic outputs from the database [[prior art claims 2 and 12] information processing unit analyzes the entry of current image data to recognize a license plate number or compares the entry of current image data with the entries of default image data stored in the spectrogram and image database to find a related information of the mobile noise source before associating the current sound characteristic information with the mobile noise source in the spectrogram and image database]].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the identification of vehicle type taught by Moustafa with the image database and noise source database retrieval taught by Yu so that a license plate may be read to identify the expected sound (Yu) [[prior art claims 2 and 12]].
Allowable Subject Matter
Claims 2, 9, and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: the closest prior art Jakobsen (US 2022/0092343 A1) teaches a machine learning algorithm which employs a cumulative reward.
Regarding claim 2, the closest prior art of record does not appear to teach the method of claim 1, wherein the acoustic dissimilarity is determined by: generating a target acoustic fingerprint from the first acoustic output; generating a reference acoustic fingerprint from the second acoustic output; computing a cumulative mean absolute error between the target acoustic fingerprint and the reference acoustic fingerprint, wherein the cumulative mean absolute error represents the acoustic dissimilarity.
Regarding claim 9, the closest prior art of record does not appear to teach the non-transitory computer-readable medium of claim 8, wherein the instructions further cause the computer to: compute a cumulative mean absolute error between the target acoustic fingerprint and the reference acoustic fingerprint, wherein the cumulative mean absolute error represents the acoustic similarity.
Regarding claim 16, the closest prior art of record does not appear to teach the computing system of claim 15, wherein the instructions further cause the computing system to: generate the second acoustic fingerprint from a second acoustic output recorded from the reference vehicle; and compute a cumulative mean absolute error between the first acoustic fingerprint and the second acoustic fingerprint, wherein the cumulative mean absolute error represents the acoustic dissimilarity.
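As an illustrative sketch only (the per-frame fingerprint values below are hypothetical and not drawn from the record), a cumulative mean absolute error over fingerprint frames of the kind recited in claims 2, 9, and 16 may be expressed as:

```python
# Hypothetical per-frame fingerprints (e.g., one spectral vector per time frame).
target_fp    = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]]
reference_fp = [[0.5, 0.5], [0.4, 0.6], [0.3, 0.7]]

def cumulative_mae(target, reference):
    """Sum of per-frame mean absolute errors; larger values indicate dissimilarity."""
    total = 0.0
    for t_frame, r_frame in zip(target, reference):
        total += sum(abs(t - r) for t, r in zip(t_frame, r_frame)) / len(t_frame)
    return total

dissimilarity = cumulative_mae(target_fp, reference_fp)
```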
Response to Arguments
Applicant’s arguments, see pg. 12, filed 12/11/2025, with respect to claim 7 have been fully considered and are persuasive. The 112(a) rejection of 9/11/2025 has been withdrawn.
Applicant's arguments filed 12/11/2025 have been fully considered but they are not persuasive. Regarding element [2] the Examiner disagrees because Moustafa explicitly teaches that the identification circuit identifies a type of vehicle based on the image event and sound event [abstract]. Regarding element [4] the Examiner again disagrees because Jakobsen explicitly states that re-identification systems may also be applied on images of vehicles or animals [0015] using a similarity metric [abstract] to determine if identified images are dissimilar or the same [0047].
Applicant’s arguments, see pgs. 11-13, filed 12/11/2025, with respect to the rejection(s) of claim(s) 7-8 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Riethmueller (US 2013/0042262 A1) and Groot (2010, Thesis).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN D ARMSTRONG whose telephone number is (571)270-7339. The examiner can normally be reached M - F 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Isam Alsomiri can be reached on 571-272-6970. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN D ARMSTRONG/ Examiner, Art Unit 3645