DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 11 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the parent's specification (Application No. 17/129,661) in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Applicant has not pointed out where the new claim is supported, nor does there appear to be a written description of the claim limitation ‘further comprising: in accordance with a determination that the selected portions of the first image and the second image include a plurality of lights, determining a light of the plurality of lights to be the traffic light based on weighting the second image from the second sensor more than the first image from the first sensor’ in the application as filed.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-5, 19 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Andreas Wendel et al. [US 20180336692 A1].
Regarding claim 1, Andreas teaches:
1. A method (i.e. The technology relates to camera systems for vehicles having an autonomous driving mode- Abstract…FIG. 8 is an example flow diagram 800 in accordance with some of the aspects described herein. The example flow diagram refers to a system including first and second cameras, such as cameras 300 and 350), comprising:
receiving, from a first sensor coupled to an autonomous vehicle, a first image including a traffic light (i.e. The images captured by the camera themselves and/or information identified from those images may be provided to the computing devices 110 in order to assist the computing devices in making driving decisions for the vehicle. For instance, the status of a traffic signal, for instance solid illuminated or flashing (such as with a flashing yellow light), may be readily determined from the images from camera 350 and used to control how the vehicle responds to the traffic light- ¶0056, fig. 6-7), the first sensor having a fixed exposure configuration including first settings that are fixed regardless of an amount of light being absorbed from a first field of view covered by the first sensor (i.e. The second camera may also be mounted on the vehicle in order to capture images of the vehicle's environment. The second camera has a second exposure time that is greater than or equal to the first exposure time and also has an ND filter. The second exposure time is a fixed (or in some examples, a variable) exposure time.- ¶0057);
receiving, from a second sensor coupled to the autonomous vehicle, a second image, the second sensor having auto-exposure configurations including second settings that are automatically changed based on an amount of light being absorbed from a second field of view covered by the second sensor (i.e. The first camera has a first exposure time and being without an ND filter, where the first exposure is a variable exposure time that is adjusted according to ambient lighting conditions- ¶0057);
determining a state of the traffic light using the first image and the second image (i.e. Again, to address this issue, images captured by the camera 350 may be processed to identify illuminated objects, and in particularly, those that flicker due to the frequency of the power grid or PWM as discussed above. At the same time, because an illuminated state of such flickering lights can be discerned from a single or very few images captured by a camera configured as camera 350 as demonstrated by image 620 of FIG. 6B, as compared to processing thousands if not tens of thousands of images captured by a camera configured as camera 300, this can save quite a bit of processing power- ¶0055… At block 830, the one or more processors use the images captured using the second camera to identify illuminated objects. At block 840, the one or more processors use the images captured using the first camera to identify the locations of objects- ¶0057); and
causing operation of the autonomous vehicle in accordance with the determined state of the traffic light (i.e. At block 880, the one or more processors use the identified illuminated objects and identified locations of objects to control the vehicle in an autonomous driving mode- ¶0057).
Regarding claim 2, Andreas teaches all the limitations of claim 1 and Andreas further teaches:
wherein the first settings comprise shutter speed, aperture, and ISO settings of the first sensor that are fixed to achieve a predetermined image exposure of the traffic light (i.e. As an alternative to camera 350's configuration with an ND filter, the aperture and/or lense of the camera 350 may be reduced. For instance, camera 300 may have an f/2 aperture and no ND filter (where f refers to focal length). However, instead of camera 350 having an f/2 aperture and ND filter, such as an ND64 filter, camera 350 may have an f/16 aperture. The f/16 aperture has an 8 times smaller diameter (or 16/2), which is 64 times (or 8^2) less area and thus allows for 64 times less light transmission. Accordingly, this would act similarly to the ND64 filter- ¶0047).
Regarding claim 3, Andreas teaches all the limitations of claim 1 and Andreas further teaches:
wherein the second settings comprise shutter speed, aperture, and ISO settings of the second sensor that are configured to be automatically changed to achieve a predetermined exposure for the amount of light being absorbed from the second field of view covered by the second sensor (i.e. As an alternative to camera 350's configuration with an ND filter, the aperture and/or lense of the camera 350 may be reduced. For instance, camera 300 may have an f/2 aperture and no ND filter (where f refers to focal length). However, instead of camera 350 having an f/2 aperture and ND filter, such as an ND64 filter, camera 350 may have an f/16 aperture. The f/16 aperture has an 8 times smaller diameter (or 16/2), which is 64 times (or 8^2) less area and thus allows for 64 times less light transmission. Accordingly, this would act similarly to the ND64 filter- ¶0047).
Regarding claim 4, Andreas teaches all the limitations of claim 1 and Andreas further teaches:
wherein the first settings comprise fixed values tuned for detecting an average illumination intensity of the traffic light (i.e. In some instances, the camera 300 may be used to capture both “light” exposure images and “dark” exposure images in order to allow the perception system 172 and/or the computing devices 110 to identify both non-emissive (using the light exposure image) and light emissive objects dark exposure image. To do so, a first image is processed by the controller 302 to determine an exposure value for capturing the average amount of light (within a predetermined range) in the environment, for instance using a logarithmic control for shutter time and a linear control for the gain value. This exposure value is then used to capture the light exposure image. A fixed offset value may then be added (or used to multiply) to one or more camera settings such as shutter time and gain in order to use the same camera to capture the dark exposure image- ¶0037).
Regarding claim 5, Andreas teaches all the limitations of claim 1 and Andreas further teaches:
wherein the first field of view of the first sensor is different than and overlaps with the second field of view of the second sensor, wherein a first focal length of the first sensor is different than a second focal length of the second sensor (i.e. As an alternative to camera 350's configuration with an ND filter, the aperture and/or lense of the camera 350 may be reduced. For instance, camera 300 may have an f/2 aperture and no ND filter (where f refers to focal length). However, instead of camera 350 having an f/2 aperture and ND filter, such as an ND64 filter, camera 350 may have an f/16 aperture. The f/16 aperture has an 8 times smaller diameter (or 16/2), which is 64 times (or 8^2) less area and thus allows for 64 times less light transmission. Accordingly, this would act similarly to the ND64 filter- ¶0047).
Regarding claim 19, the computer-readable medium storing instructions of claim 19 corresponds to the same method as claimed in claim 1, and therefore is also rejected for the same rationale as listed above.
Regarding claim 20, apparatus claim 20 is drawn to the apparatus using/performing the same method as claimed in claim 1. Therefore, apparatus claim 20 corresponds to method claim 1, and is rejected for the same rationale as used above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 6, 7, 9, 14 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Andreas Wendel et al. [US 20180336692 A1] in view of Andreas Wendel et al. [US 20190208111 A1] (hereafter Andreas’).
Regarding claim 6, Andreas teaches all the limitations of claim 1.
However, Andreas does not teach explicitly:
wherein the second sensor is positioned forward of and below the first sensor relative to the autonomous vehicle.
In the same field of endeavor, Andreas’ teaches:
wherein the second sensor is positioned forward of and below the first sensor relative to the autonomous vehicle (i.e. see figs. 4A and 8A).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Andreas with the teachings of Andreas’ to improve object identification and avoidance by having two or more images that span a wider dynamic range than any single image (Andreas’- ¶0005).
Regarding claim 7, Andreas teaches all the limitations of claim 1.
However, Andreas does not teach explicitly:
further comprising: identifying a moving object within the second image.
In the same field of endeavor, Andreas’ teaches:
further comprising: identifying a moving object within the second image (i.e. a fast moving object (e.g., a car traveling at 70 miles-per-hour, mph) may only be identifiable using an image captured by an image sensor with a sufficiently short exposure duration because image blur will occur in an image sensor with a longer exposure duration- ¶0033… in some embodiments, because the first image sensor 410 and the second image sensor 420 may each be configured to capture objects of specific types (e.g., fast-moving vs. slow-moving objects or actively illuminated vs. passively illuminated objects), the hardware of a given image sensor may be specialized for a given range of luminance levels and/or variations over time. For example, the second image sensor 420 may have hardware specialized for detecting passively illuminated objects (e.g., specialized hardware in addition to the neutral-density filter 422, such as different lenses than the first image sensor 410, one or more additional filters compared to the first image sensor 410, etc.)- ¶0136).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Andreas with the teachings of Andreas’ to improve object identification and avoidance by having two or more images that span a wider dynamic range than any single image (Andreas’- ¶0005).
Regarding claim 9, Andreas teaches all the limitations of claim 1.
However, Andreas does not teach explicitly:
further comprising: receiving, from a third sensor coupled to the autonomous vehicle, a third image, the third sensor having an auto-exposure configuration including third settings that are automatically changed based on an amount of light being absorbed from a third field of view covered by the third sensor, wherein determining the state of the traffic light comprises determining the state of the traffic light using the first image, the second image, and the third image, wherein at least half of the second field of view of the second sensor is positioned outside of the third field of view of the third sensor.
In the same field of endeavor, Andreas’ teaches:
further comprising: receiving, from a third sensor coupled to the autonomous vehicle, a third image, the third sensor having an auto-exposure configuration including third settings that are automatically changed based on an amount of light being absorbed from a third field of view covered by the third sensor, wherein determining the state of the traffic light comprises determining the state of the traffic light using the first image, the second image, and the third image, wherein at least half of the second field of view of the second sensor is positioned outside of the third field of view of the third sensor (i.e. In embodiments having three image sensors, the third image sensor may have a variable exposure level that is different from the variable exposure level of the first image sensor (e.g., an auto-exposure setting on the third image sensor may be determined by a camera controller such that the exposure level of the third image sensor is different than the first image sensor, and the camera controller may manipulate a shutter speed/exposure duration, an aperture size, and/or an ISO sensitivity to achieve the determined exposure level of the third image sensor). For example, the exposure duration of the third image sensor may be higher than the exposure duration of the first image sensor so the third image sensor is more sensitive to low luminance objects in the scene- ¶0036).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Andreas with the teachings of Andreas’ to improve object identification and avoidance by having two or more images that span a wider dynamic range than any single image (Andreas’- ¶0005).
Regarding claim 14, Andreas teaches all the limitations of claim 1.
However, Andreas does not teach explicitly:
further comprising: receiving, from a third sensor coupled to the autonomous vehicle, a third image, the third image including one or more traffic lights outside of the second field of view of the second sensor; and determining a state of the one or more traffic lights using on the third image.
In the same field of endeavor, Andreas’ teaches:
further comprising: receiving, from a third sensor coupled to the autonomous vehicle, a third image, the third image including one or more traffic lights outside of the second field of view of the second sensor; and determining a state of the one or more traffic lights using on the third image (i.e. FIG. 9A illustrates a camera system 900, according to example embodiments- ¶0184... the first perspective 902 and the second perspective 904 may include different objects than one another (e.g., the first perspective 902 may contain a tree while the second perspective 904 contains a stop sign). Additionally or alternatively, the first perspective 902 may contain actively illuminated objects and/or passively illuminated objects. Similarly, the second perspective 904 may contain actively illuminated objects and/or passively illuminated objects. Using the camera system 900, objects in multiple directions relative to the vehicle 200 (i.e., multiple perspectives based on the field of view of each of the image sensors of the camera system 900) may be captured and identified- ¶0185-0193).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Andreas with the teachings of Andreas’ to improve object identification and avoidance by having two or more images that span a wider dynamic range than any single image (Andreas’- ¶0005).
Regarding claim 18, Andreas teaches all the limitations of claim 1.
However, Andreas does not teach explicitly:
wherein determining state of the traffic light further comprises using a neural network to determine the state of the traffic light.
In the same field of endeavor, Andreas’ teaches:
wherein determining state of the traffic light further comprises using a neural network to determine the state of the traffic light (i.e. the processor may use a machine-learned model (e.g., deep convolutional neural network) to perform object identification in each of the two images, either individually or in combination. In order to save computation time, the processor may only attempt to identify actively illuminated objects (e.g., tail lights, traffic lights, light-emitting diode (LED) road signs, etc.) in the darker image (e.g., the second image arising from the second image sensor that has a corresponding neutral-density filter) and may only attempt to identify passively illuminated objects (e.g., objects illuminated by reflecting or refracting light, such as pedestrians, trees, stop signs, animals, etc.), objects illuminated from ambient light, and/or non-illuminated objects in the brighter image (e.g., the first image arising from the first image sensor)- ¶0039).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Andreas with the teachings of Andreas’ to improve object identification and avoidance by having two or more images that span a wider dynamic range than any single image (Andreas’- ¶0005).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Andreas Wendel et al. [US 20180336692 A1] in view of David Ian Franklin Ferguson et al. [US 9690297 B1].
Regarding claim 8, Andreas teaches all the limitations of claim 1.
However, Andreas does not teach explicitly:
wherein causing operation of the autonomous vehicle in accordance with the determined state of the traffic light further comprises causing the autonomous vehicle to stop in accordance with the determined state of the traffic light being a red circle or a red arrow.
In the same field of endeavor, David teaches:
wherein causing operation of the autonomous vehicle in accordance with the determined state of the traffic light further comprises causing the autonomous vehicle to stop in accordance with the determined state of the traffic light being a red circle or a red arrow (i.e. In some example scenarios, the autonomous vehicle 302 is stopped at an intersection, such as when the illuminated component of the traffic light is a color, such as red, that signals a vehicle to stop- Col 12, line 7-11).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Andreas with the teachings of David to reduce the possible uncertainty of the state of the traffic light (David- Col 15, line 44-47).
Claims 10, 12, 13, 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Andreas Wendel et al. [US 20180336692 A1] in view of David I. Ferguson et al. [US 20130253754 A1].
Regarding claim 10, Andreas teaches all the limitations of claim 1.
However, Andreas does not teach explicitly:
further comprising: selecting portions of the first image and the second image that include the traffic light, wherein determining the state of the traffic light comprises analyzing the selected portions of the first image and the second image.
In the same field of endeavor, David teaches:
further comprising: selecting portions of the first image and the second image that include the traffic light (i.e. the predicted position may be an axis-aligned bounding box which selects a portion of the image- ¶0039), wherein determining the state of the traffic light comprises analyzing the selected portions of the first image and the second image (i.e. The portion of the image within the predicted position may then be analyzed to determine the state of the traffic signal. For example, the predicted position may be processed to identify brightly colored red or green objects, and the predicted position may change as the position of the vehicle approaches the traffic signal- ¶0039).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Andreas with the teachings of David to facilitate detection of the traffic signal by scanning the larger target area (David- ¶0063).
Regarding claim 12, Andreas teaches all the limitations of claim 1.
However, Andreas does not teach explicitly:
wherein determining the state of the traffic light comprises distinguishing a first light emitted by the traffic light from a second light emitted by another light source by referencing an expected location of the first light within the first image and the second image.
In the same field of endeavor, David teaches:
wherein determining the state of the traffic light comprises distinguishing a first light emitted by the traffic light from a second light emitted by another light source by referencing an expected location of the first light within the first image and the second image (i.e. a confidence in a detected traffic signal and an associated state of the traffic signal may be determined based on a scenario in which the traffic signal is detected. For example, a vehicle may be more confident in a detected traffic signal in locations in which the vehicle expects to detect traffic signals and less confident in a detected traffic signal in locations in which the vehicle does not expect to detect traffic signals. Accordingly, in some examples, control of the vehicle may be modified based on a confidence in a traffic signal- ¶0019… The portion of the image within the predicted position may then be analyzed to determine the state of the traffic signal. For example, the predicted position may be processed to identify brightly colored red or green objects, and the predicted position may change as the position of the vehicle approaches the traffic signal- ¶0039).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Andreas with the teachings of David to facilitate detection of the traffic signal by scanning the larger target area (David- ¶0063).
Regarding claim 13, Andreas and David teach all the limitations of claim 12 and Andreas further teaches:
wherein determining the state of the traffic light comprises analyzing a first portion of the first image and a second portion of the second image, wherein the first light and the second light are visible in the first portion of the first image and the second portion of the second image (i.e. Using an ND filter allows for a longer exposure time by filtering out additional light. In other words, the exposure time of camera 350 may be much greater than camera 300 while still capturing useful images of objects. As an example, the exposure time can be on the order of milliseconds, such as for instance 1 to 20 milliseconds or times therebetween, such as at least 5 or 10 milliseconds. FIGS. 7A and 7B demonstrate the same image of a pair of traffic signal lights 730, 732 that are both illuminated in the color green using a longer exposure time, for instance, on the order of milliseconds. Image 710 of FIG. 7A is captured without an ND filter while image 720 of FIG. 7B is captured with an ND filter, for instance using a camera configured similarly to camera 350. Each of the traffic signal lights 730, 732 is identified by a white circle for ease of understanding, although these circles are not part of the images themselves. As can be seen, the images 710 and 720 demonstrate how the use of the ND filter eliminates most of the other information (or other light), allowing the viewer, and the vehicle's perception system 172, to pick out the illuminated traffic lights more readily. Although not visible from black and white images, the ND filter also preserves the light's color. For example, in image 710, the traffic signal lights appear white with green halo, while in image 720, the traffic signal lights appear as green circles- ¶0043).
Regarding claim 15, Andreas teaches all the limitations of claim 1.
However, Andreas does not teach explicitly:
wherein each of the first sensor and the second sensor includes fixed focus optics and a digital sensor configured to detect visible light.
In the same field of endeavor, David teaches:
wherein each of the first sensor and the second sensor includes fixed focus optics and a digital sensor configured to detect visible light (i.e. a camera with a fixed lens with a 30 degree field of view- ¶0026…To this end, the camera 610 may be configured to detect visible light- ¶0073).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Andreas with the teachings of David to facilitate detection of the traffic signal by scanning the larger target area (David- ¶0063).
Regarding claim 17, Andreas teaches all the limitations of claim 15.
However, Andreas does not teach explicitly:
wherein each of the first sensor and the second sensor is forward facing with respect to the autonomous vehicle with a corresponding field of view covering an angle of at least 30 degrees centered on a forward end of the autonomous vehicle.
In the same field of endeavor, David teaches:
wherein each of the first sensor and the second sensor is forward facing with respect to the autonomous vehicle with a corresponding field of view covering an angle of at least 30 degrees centered on a forward end of the autonomous vehicle (i.e. a camera with a fixed lens with a 30 degree field of view- ¶0026…To this end, the camera 610 may be configured to detect visible light- ¶0073).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Andreas with the teachings of David to facilitate detection of the traffic signal by scanning the larger target area (David- ¶0063).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Andreas Wendel et al. [US 20180336692 A1] in view of Elad Toledano et al. [US 20190353784 A1].
Regarding claim 16, Andreas teaches all the limitations of claim 15.
However, Andreas does not teach explicitly:
wherein the first sensor and the second sensor are respectively incorporated into a first video camera and a second video camera, wherein each of the first video camera and the second video camera is configured to capture imagery at a rate of at least 24 frames per second.
In the same field of endeavor, Elad teaches:
wherein the first sensor and the second sensor are respectively incorporated into a first video camera and a second video camera, wherein each of the first video camera and the second video camera is configured to capture imagery at a rate of at least 24 frames per second. (i.e. In one example process, processor 110 may analyze at least one image from an onboard camera to detect a representation of an object (e.g., an object fixed in world coordinates) in the at least one image. Based on the LIDAR output aligned with the camera output, a distance to the detected object may be determined. By monitoring how the distance information from the LIDAR output changes over time (e.g., at the image capture rate of the camera, which may be 24 fps, 30 fps, etc.)- ¶0187).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention, to modify the teachings of Andreas with the teachings of Elad to delegate control to the driver of vehicle 200 in order to improve safety conditions (Elad- ¶0130).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLIFFORD HILAIRE whose telephone number is (571)272-8397. The examiner can normally be reached 5:30-14:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SATH V PERUNGAVOOR can be reached at (571)272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
CLIFFORD HILAIRE
Primary Examiner
Art Unit 2488
/CLIFFORD HILAIRE/Primary Examiner, Art Unit 2488