DETAILED ACTION
It would be of great assistance to the Office if all incoming papers pertaining to a filed application carried the following items:
1. Application number (checked for accuracy, including series code and serial no.).
2. Group art unit number (copied from most recent Office communication).
3. Filing date.
4. Name of the examiner who prepared the most recent Office action.
5. Title of invention.
6. Confirmation number (See MPEP § 503).
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of Species I in the reply filed on 02/20/2026 is acknowledged.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 – 11 and 16 – 21 are rejected under 35 U.S.C. 103 as being unpatentable over Nagaya et al. (US 2010/0295839) in view of Stent et al. (US 2019/0147607), Raynor et al. (US 11,270,668), and Rogan et al. (US 2015/0362587).
As to claim 1, Nagaya discloses a system comprising:
a microcontroller (210, 720, 730 and 740 of fig. 3 [0035 – 0038]);
a sensor (camera 700 and sensor 710 of fig. 3) coupled to the microcontroller (210, 720, 730 and 740 of fig. 3 [0035 – 0038]) and comprising a plurality of pixels (camera inherently comprises multiple pixels [0037] [0042]), the sensor being configured to perform a first capture of an image scene comprising a user (camera 700 picks up a user image [0037]),
the first capture comprising a measurement of a distance from the user of the system (it is possible to obtain a wide-angle user image and measure the distance between the user and the image display device [0030]),
subsequent to the first capture, provide an estimate of a direction associated with the user (calculating direction of the face [0042]);
and a display (display 200 of fig. 3) coupled to the microcontroller (210, 720, 730 and 740 of fig. 3 [0035 – 0038]), the microcontroller (210, 720, 730 and 740 of fig. 3 [0035 – 0038]) being configured to control the display (display 200 of fig. 3), or another circuit coupled to the microcontroller (210, 720, 730 and 740 of fig. 3 [0035 – 0038]), based on the estimate of the direction associated with the user (displaying in the power savings mode based on direction of the face and power savings flag [0042 – 0045]).
Nagaya fails to disclose
a microcontroller comprising a neural network;
the sensor is a time-of-flight sensor;
the first capture comprising, for each pixel, a measurement of a distance and of a signal value corresponding to a number of photons returning towards the sensor per unit of time;
subsequent to the first capture and for each pixel, calculate a standard deviation value associated with the distance, and a standard deviation value associated with the signal value and a confidence value, and provide, to the neural network in association with each pixel, the distance, signal, and standard deviation values associated with the distance and with the signal, and the confidence value, and the neural network being configured to generate, based on the values provided by the sensor, an estimate of a direction associated with the user.
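For illustration only, and not as a characterization of the claimed invention or of any reference of record, the recited data flow can be pictured as each pixel contributing a small feature vector (distance, signal, their standard deviations, and a confidence value) that is handed to a neural network. The sketch below uses hypothetical array shapes and names:

```python
# Illustrative sketch only (hypothetical shapes and names, not from the record):
# assemble the per-pixel distance, signal, standard deviation, and confidence
# values recited in claim 1 into one feature tensor for a neural network.
import numpy as np

def build_pixel_features(distance_samples, signal_samples, confidence):
    """distance_samples, signal_samples: (n_samples, H, W); confidence: (H, W).
    Returns an (H, W, 5) tensor: one 5-value feature vector per pixel."""
    distance = distance_samples.mean(axis=0)       # per-pixel distance estimate
    signal = signal_samples.mean(axis=0)           # per-pixel photon-return rate
    distance_std = distance_samples.std(axis=0)    # standard deviation of the distance
    signal_std = signal_samples.std(axis=0)        # standard deviation of the signal
    return np.stack([distance, signal, distance_std, signal_std, confidence], axis=-1)

rng = np.random.default_rng(0)
features = build_pixel_features(rng.normal(1.0, 0.02, (10, 8, 8)),   # 10 repeats, 8x8 pixels
                                rng.normal(500.0, 5.0, (10, 8, 8)),
                                np.ones((8, 8)))                      # all pixels trusted
print(features.shape)  # (8, 8, 5) -> fed to the neural network pixel by pixel
```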
In the same field of endeavor, Stent discloses a system and a method for gaze tracking (TITLE), wherein the gaze is determined using a microcontroller (132 of fig. 1) comprising a neural network (convolutional neural network 510 of fig. 5) that is used to determine the gaze direction of a subject 180 [0043].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nagaya and the teachings of Stent such that the convolutional neural network as disclosed by Stent was used to determine the gaze direction of the user, with motivation to provide systems and methods for determining the gaze direction of a subject from arbitrary viewpoints when the eye of the subject becomes self-occluded from an eye-tracker (Stent, [0005]).
Nagaya in view of Stent fails to disclose
the sensor is a time-of-flight sensor;
the first capture comprising, for each pixel, a measurement of a distance and of a signal value corresponding to a number of photons returning towards the sensor per unit of time;
subsequent to the first capture and for each pixel, calculate a standard deviation value associated with the distance, and a standard deviation value associated with the signal value and a confidence value, and provide, to the neural network in association with each pixel, the distance, signal, and standard deviation values associated with the distance and with the signal, and the confidence value, and the neural network being configured to generate, based on the values provided by the sensor, an estimate of a direction associated with the user.
In the same field of endeavor, Raynor discloses a system and method for detecting screen orientation of a device (TITLE), wherein a sensor is a time-of-flight sensor (col. 5, lines 4 – 16 and col. 6, lines 12 – 48); wherein
the first capture comprising, for each pixel, a measurement of a distance and of a signal value corresponding to a number of photons returning towards the sensor per unit of time (the TOF image sensor 140 may use the time-delay information for each detector to determine the depth information for the respective pixel using at least a single photon, col. 6, lines 44 – 65).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nagaya in view of Stent and the teachings of Raynor such that the sensor used was a time-of-flight sensor as disclosed by Raynor, with motivation to achieve rapid and accurate detection (Raynor, col. 1, lines 35 – 42).
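As general background on the time-of-flight principle relied upon above (the relation below is common knowledge and is not quoted from Raynor), the per-pixel depth follows from the measured photon round-trip delay:

```python
# General time-of-flight relation (background only, not taken from Raynor):
# depth = c * delay / 2, since the measured delay covers the round trip.
C = 299_792_458.0  # speed of light in m/s

def depth_from_delay(delay_s: float) -> float:
    return C * delay_s / 2.0

print(depth_from_delay(4e-9))  # a 4 ns round-trip delay is roughly 0.6 m
```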
Nagaya in view of Stent and Raynor fails to disclose that subsequent to the first capture and for each pixel, calculate a standard deviation value associated with the distance, and a standard deviation value associated with the signal value and a confidence value, and provide, to the neural network in association with each pixel, the distance, signal, and standard deviation values associated with the distance and with the signal, and the confidence value, and the neural network being configured to generate, based on the values provided by the sensor, an estimate of a direction associated with the user.
In the same field of endeavor, Rogan discloses optical system calibration (TITLE), wherein a standard deviation of detected angles was used to determine a confidence value (aggregation may enable further evaluation of the detected angle 312; e.g., the standard deviation of the detected angle 312 may enable a measure of the confidence in the detected angle 312 of the lidar sensor 112, and the comparison with the predicted angle 310 may be performed when the standard deviation of the detected angle 312 is within an acceptable confidence range for a statistically significant sample size [0055]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nagaya in view of Stent and Raynor and the teachings of Rogan, such that, subsequent to the first capture and for each pixel, the standard deviation associated with the distance value was calculated, and a standard deviation value associated with the signal value and a confidence value were provided to the neural network in order to generate an estimate of a direction associated with the user, with motivation to further correct the detection results (Rogan [0003]).
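For context only, the kind of standard-deviation-based confidence gating cited from Rogan can be sketched generically as follows (generic code, not taken from Rogan; the thresholds are hypothetical):

```python
# Generic sketch (not from Rogan): accept a detected angle only when the spread
# of the aggregated samples is within an acceptable range for a statistically
# significant sample size; the thresholds here are hypothetical.
import statistics

def angle_is_confident(samples, max_std_deg=0.5, min_samples=30):
    if len(samples) < min_samples:
        return False                       # not yet a significant sample size
    return statistics.stdev(samples) <= max_std_deg

samples = [10.1, 9.9, 10.0, 10.2] * 10      # 40 aggregated angle detections, in degrees
print(angle_is_confident(samples))          # True: small spread, enough samples
```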
As to claim 2 (dependent on 1), Nagaya in view of Stent, Raynor and Rogan disclose the system, wherein each pixel of the sensor is further configured to measure a reflectance value (Raynor, detection of reflectance, col. 4, line 64 – col. 5, line 15), and wherein the neural network is configured to generate the estimate of the direction further based on the reflectance values (Stent, neural network based gaze determination [0043]).
As to claim 3 (dependent on 1), Nagaya in view of Stent, Raynor and Rogan disclose the system, further comprising a memory storing a software application (Nagaya, system controller 110 of fig. 3, inherently comprising a memory), and wherein the microcontroller (Nagaya, 210, 720, 730 and 740 of fig. 3 [0035 – 0038]) is further configured to control execution of the software application based on the estimate of the direction generated by the neural network (Nagaya, executing thinned decoding in a low power mode of fig. 9).
As to claim 4 (dependent on 1), Nagaya in view of Stent, Raynor and Rogan disclose the system, further comprising a backlight unit (BLU) (Nagaya, fig. 6), and wherein the microcontroller (Nagaya, 210, 720, 730 and 740 of fig. 3 [0035 – 0038]) is configured to deactivate the backlight unit based on the estimate of the direction (Nagaya, [0055]).
As to claim 5 (dependent on 1), Nagaya in view of Stent, Raynor and Rogan disclose the system, wherein the microcontroller (Nagaya, 210, 720, 730 and 740 of fig. 3 [0035 – 0038]) is configured to control a refresh rate of the display (Nagaya, display 200 of fig. 3) based on the estimate of the direction (Nagaya, [0062]).
As to claim 6 (dependent on 1), Nagaya in view of Stent, Raynor and Rogan disclose the system, wherein the ToF sensor (as disclosed by Raynor, col. 5, lines 4 – 16 and col. 6, lines 12 – 48) is configured to perform, subsequent to the first capture, a second capture, a time interval between the first and second captures being determined by the estimate of the direction generated by the neural network, and/or based on an attention value calculated subsequent to the first capture (Nagaya, measuring degree of attention [0037] at a frame rate of the camera [0043], wherein the camera frame rate is controlled based on attentiveness [0100 – 0101]).
As to claim 7 (dependent on 1), Nagaya discloses the system, wherein the microcontroller (Nagaya, 210, 720, 730 and 740 of fig. 3 [0035 – 0038]) is further configured to generate an attention value associated with the user (attentiveness decision [0033]), and wherein the microcontroller (Nagaya, 210, 720, 730 and 740 of fig. 3 [0035 – 0038]) is configured to control the brightness of the display (display 200 of fig. 3) further based on the attention value (Nagaya, adjusting backlight brightness [0055]).
As to claim 8 (dependent on 7), Nagaya in view of Stent, Raynor and Rogan disclose the system, wherein the ToF sensor (Raynor, col. 5, lines 4 – 16 and col. 6, lines 12 – 48) is configured to perform, subsequent to the first capture, a second capture, a time interval between the first and second captures being determined based on the attention value calculated subsequent to the first capture (Nagaya, measuring degree of attention [0037] at a frame rate of the camera [0043], wherein the camera frame rate is controlled based on attentiveness [0100 – 0101]).
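To make the capture-interval mechanism addressed in claims 6 and 8 concrete, a minimal sketch follows; the interval values and threshold are hypothetical and are not taken from Nagaya or from the claims:

```python
# Minimal sketch (hypothetical interval values and threshold): lengthen the
# time between ToF captures when the attention value computed after the
# previous capture indicates the user is not paying attention.
def capture_interval_s(attention: float, fast: float = 0.033, slow: float = 0.5) -> float:
    """attention in [0, 1]; high attention -> frequent captures."""
    return fast if attention >= 0.5 else slow

print(capture_interval_s(0.9))  # 0.033 s (~30 captures per second)
print(capture_interval_s(0.1))  # 0.5 s (power-saving cadence)
```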
As to claim 9 (dependent on 1), Nagaya discloses the system, further comprising at least one further display (Nagaya, display 200 of fig. 3), the microcontroller (Nagaya, 210, 720, 730 and 740 of fig. 3 [0035 – 0038]) being further configured to control the brightness of the at least one further display (Nagaya, display 200 of fig. 3) based on the estimate of a direction associated with the user (Nagaya, controlling display brightness by controlling backlight [0055]).
As to claim 10 (dependent on 9), Nagaya discloses the system, wherein the direction is estimated (Nagaya, [0042]), and wherein the estimate of the direction is a direction describing the orientation of the head of the user, among the North, North-East, North-West, East, West, and South directions, the North direction indicating that the user is facing the display, and the South direction indicating that the user has their back facing the display (Nagaya, detecting a face in the user image and determining the face direction in various degrees at S3003 of fig. 14 [0077 – 0078], wherein the various degrees are interpreted to correspond to the North, North-East, North-West, East, West, and South directions).
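For illustration of the interpretation stated above, a face-direction angle expressed in degrees could be quantized to the recited compass labels roughly as follows; the angle bands are hypothetical and are not asserted to be Nagaya's:

```python
# Hypothetical quantization (illustrative only, not Nagaya's): map a head
# orientation angle, in degrees from the display normal, to the compass labels
# of claim 10 (North = facing the display, South = back to the display).
def compass_label(angle_deg: float) -> str:
    a = angle_deg % 360
    if a >= 180:
        a -= 360                 # fold into (-180, 180]
    if abs(a) <= 22.5:
        return "North"
    if abs(a) >= 157.5:
        return "South"
    if 22.5 < a < 90:
        return "North-East"
    if -90 < a < -22.5:
        return "North-West"
    return "East" if a > 0 else "West"

print([compass_label(d) for d in (0, 40, -40, 100, -100, 180)])
# ['North', 'North-East', 'North-West', 'East', 'West', 'South']
```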
As to claim 11 (dependent on 10), Nagaya discloses the system, wherein the microcontroller is configured to control the decrease of the brightness of the display when, between at least two consecutive captures, the estimate of the direction transitions: from North to North-West or to North-East; and/or from North to South; and/or from North-West or North-East to South (Nagaya, the controller is configured to decrease brightness based on attentiveness [0055], wherein attentiveness is determined based on face direction as shown in fig. 14, particularly based on the face direction facing the sensor [0076 – 0081], wherein face directions relative to the camera smaller than the threshold value are interpreted as the North direction); and wherein the microcontroller is configured to control the increase of the brightness of the display (display 200 of fig. 3) when, between at least two consecutive captures, the estimate of the direction transitions: from South to North; and/or from South to North-West or from South to North-East; and/or from North-West or North-East to North (Nagaya, the controller is configured to increase brightness based on attentiveness [0055], wherein attentiveness is determined based on face direction as shown in fig. 14, particularly based on the face direction facing the sensor [0076 – 0081], wherein face directions relative to the camera smaller than the threshold value are interpreted as the North direction).
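The transition-based brightness control recited in claim 11 can likewise be pictured with the short sketch below (illustrative only; the step size and clamping are hypothetical):

```python
# Illustrative only (hypothetical step size): adjust brightness based on the
# transition of the estimated direction between two consecutive captures,
# following the transitions listed in claim 11.
DECREASE = {("North", "North-West"), ("North", "North-East"), ("North", "South"),
            ("North-West", "South"), ("North-East", "South")}
INCREASE = {("South", "North"), ("South", "North-West"), ("South", "North-East"),
            ("North-West", "North"), ("North-East", "North")}

def adjust_brightness(brightness: int, prev_dir: str, cur_dir: str, step: int = 10) -> int:
    if (prev_dir, cur_dir) in DECREASE:
        return max(0, brightness - step)
    if (prev_dir, cur_dir) in INCREASE:
        return min(100, brightness + step)
    return brightness

print(adjust_brightness(80, "North", "South"))  # 70: user turned away
print(adjust_brightness(40, "South", "North"))  # 50: user turned back
```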
As to claim 16, Nagaya discloses a method comprising:
capturing, by a sensor (camera 700 and sensor 710 of fig. 3) comprising a plurality of pixels (a camera inherently comprises multiple pixels [0037] [0042]), an image scene comprising a user of a display (camera 700 picks up a user image [0037]),
providing, by the sensor to a microcontroller (210, 720, 730 and 740 of fig. 3 [0035 – 0038]), the measurements performed by the pixels (signal from camera 700 of fig. 4),
the microcontroller configured to estimate, based on the provided measurements, a direction describing the orientation of the head of the user (calculating direction of the face [0042]); and
controlling, by the microcontroller (210, 720, 730 and 740 of fig. 3 [0035 – 0038]), the display (display 200 of fig. 3), or another circuit coupled to the microcontroller (210, 720, 730 and 740 of fig. 3 [0035 – 0038]), based on the estimate of the orientation of the head (displaying in the power savings mode based on direction of the face and power savings flag [0042 – 0045]).
Nagaya fails to disclose
the microcontroller comprising a neural network;
the sensor is a time-of-flight sensor; and
capturing comprising measuring, by each sensor pixel, a distance value, a signal value, and standard deviation values associated with the distance value and with the signal value, the sensor being further configured to generate, for each pixel, a confidence value.
In the same field of endeavor, Stent discloses a system and a method for gaze tracking (TITLE), wherein the gaze is determined using a microcontroller (132 of fig. 1) comprising a neural network (convolutional neural network 510 of fig. 5) that is used to determine the gaze direction of a subject 180 [0043].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nagaya and the teachings of Stent such that the convolutional neural network as disclosed by Stent was used to determine the gaze direction of the user, with motivation to provide systems and methods for determining the gaze direction of a subject from arbitrary viewpoints when the eye of the subject becomes self-occluded from an eye-tracker (Stent, [0005]).
Nagaya in view of Stent fails to disclose
the sensor is a time-of-flight sensor; and
capturing comprising measuring, by each sensor pixel, a distance value, a signal value, and standard deviation values associated with the distance value and with the signal value, the sensor being further configured to generate, for each pixel, a confidence value.
In the same field of endeavor, Raynor discloses a system and method for detecting screen orientation of a device (TITLE), wherein a sensor is a time-of-flight sensor (col. 5, lines 4 – 16 and col. 6, lines 12 – 48), and wherein capturing comprises measuring, by each sensor pixel, a distance value and a signal value corresponding to a number of photons returning towards the sensor per unit of time (the TOF image sensor 140 may use the time-delay information for each detector to determine the depth information for the respective pixel using at least a single photon, col. 6, lines 44 – 65).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nagaya in view of Stent and the teachings of Raynor such that the sensor used was a time-of-flight sensor as disclosed by Raynor, with motivation to achieve rapid and accurate detection (Raynor, col. 1, lines 35 – 42).
Nagaya in view of Stent and Raynor fails to disclose
capturing comprising measuring standard deviation values associated with the distance value and with the signal value, the sensor being further configured to generate, for each pixel, a confidence value.
In the same field of endeavor, Rogan discloses optical system calibration (TITLE), wherein a standard deviation of detected angles was used to determine a confidence value (aggregation may enable further evaluation of the detected angle 312; e.g., the standard deviation of the detected angle 312 may enable a measure of the confidence in the detected angle 312 of the lidar sensor 112, and the comparison with the predicted angle 310 may be performed when the standard deviation of the detected angle 312 is within an acceptable confidence range for a statistically significant sample size [0055]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nagaya in view of Stent and Raynor and the teachings of Rogan, such that, subsequent to the first capture and for each pixel, the standard deviation associated with the distance value was calculated, and a value of the standard deviation associated with the signal value and a confidence value were provided to the neural network in order to generate an estimate of a direction associated with the user, with motivation to further correct the detection results (Rogan [0003]).
As to claim 17 (dependent on 16), Nagaya in view of Stent, Raynor and Rogan discloses the method, further comprising generating an attention value associated with the user (Nagaya, attentiveness decision [0033]), and controlling the brightness of the display (Nagaya, display 200 of fig. 3) further based on the attention value (Nagaya, adjusting backlight brightness [0055]).
As to claim 18 (dependent on 16), Nagaya in view of Stent, Raynor and Rogan discloses the method, further comprising measuring at each pixel a reflectance value (Raynor, detection of reflectance, col. 4, line 64 – col. 5, line 15), and generating the estimate of the orientation further based on the reflectance values (Stent, neural network based gaze determination [0043]).
As to claim 19 (dependent on 16), Nagaya in view of Stent, Raynor and Rogan discloses the method, further comprising deactivating a backlight unit based on the estimate of the orientation (Nagaya, deactivating backlight [0055]).
As to claim 20 (dependent on 16), Nagaya in view of Stent, Raynor and Rogan discloses the method, wherein the estimate of the orientation (Nagaya, [0042]) is a direction describing the orientation of the head of the user, among the North, North-East, North-West, East, West, and South directions, the North direction indicating that the user is facing the display, and the South direction indicating that the user has their back facing the display (Nagaya, detecting a face in the user image and determining the face direction in various degrees at S3003 of fig. 14 [0077 – 0078], wherein the various degrees are interpreted to correspond to the North, North-East, North-West, East, West, and South directions).
As to claim 21, Nagaya discloses a system comprising:
a microcontroller (210, 720, 730 and 740 of fig. 3 [0035 – 0038]);
a sensor (camera 700 and sensor 710 of fig. 3) coupled to the microcontroller (210, 720, 730 and 740 of fig. 3 [0035 – 0038]) and comprising a plurality of pixels (camera inherently comprises multiple pixels [0037] [0042]),
the sensor being configured to perform a plurality of consecutive captures of image scenes comprising a user (camera 700 picks up a user image [0037]; multiple consecutive captures may be taken as shown in fig. 12, wherein the image is recaptured after step S01008 of fig. 12),
each capture comprising, for each pixel, a measurement of a distance from the user (it is possible to obtain a wide-angle user image and measure the distance between the user and the image display device [0030]);
the microcontroller being further configured to:
subsequent to the capture, provide an estimate of a direction associated with the user (calculating direction of the face [0042]); and
store (storage is inherent for image processing) the direction estimates from the plurality of consecutive captures (consecutive frames of image detection, as shown in the loop of S01002 – S01008 of fig. 12), and generate an attention value associated with the user based on the stored direction estimates from the plurality of consecutive captures (determine that the user is watching the image at step S01002 of fig. 12, wherein the attention value corresponds to the positive determination that the user is watching the image);
and a display (display 200 of fig. 3) coupled to the microcontroller (210, 720, 730 and 740 of fig. 3 [0035 – 0038]), the microcontroller (210, 720, 730 and 740 of fig. 3 [0035 – 0038]) being configured to control brightness of the display based on the attention value (displaying in the power savings mode based on direction of the face and power savings flag [0042 – 0045]).
Nagaya fails to disclose
a microcontroller comprising a neural network;
the sensor is a time-of-flight sensor;
each capture comprising, for each pixel, a measurement of a distance and of a signal value corresponding to a number of photons returning towards the sensor per unit of time;
for each capture and for each pixel, calculate a standard deviation value associated with the distance, and a standard deviation value associated with the signal value and a confidence value, and provide, to the neural network in association with each pixel for each capture, the distance, signal, and standard deviation values associated with the distance and with the signal, and the confidence value;
the neural network being configured to generate, based on the values provided by the ToF sensor from each capture, a respective estimate of a direction associated with the user for each capture.
In the same field of endeavor, Stent discloses a system and a method for gaze tracking (TITLE), wherein the gaze is determined using a microcontroller (132 of fig. 1) comprising a neural network (convolutional neural network 510 of fig. 5) that is used to determine the gaze direction of a subject 180 [0043].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nagaya and the teachings of Stent such that the convolutional neural network as disclosed by Stent was used to determine the gaze direction of the user, with motivation to provide systems and methods for determining the gaze direction of a subject from arbitrary viewpoints when the eye of the subject becomes self-occluded from an eye-tracker (Stent, [0005]).
Nagaya in view of Stent fails to disclose
the sensor is a time-of-flight sensor;
each capture comprising, for each pixel, a measurement of a distance and of a signal value corresponding to a number of photons returning towards the sensor per unit of time;
for each capture and for each pixel, calculate a standard deviation value associated with the distance, and a standard deviation value associated with the signal value and a confidence value, and provide, to the neural network in association with each pixel for each capture, the distance, signal, and standard deviation values associated with the distance and with the signal, and the confidence value;
the neural network being configured to generate, based on the values provided by the ToF sensor from each capture, a respective estimate of a direction associated with the user for each capture.
In the same field of endeavor, Raynor discloses a system and method for detecting screen orientation of a device (TITLE), wherein a sensor is a time-of-flight sensor (col. 5, lines 4 – 16 and col. 6, lines 12 – 48); wherein
each capture comprising, for each pixel, a measurement of a distance and of a signal value corresponding to a number of photons returning towards the sensor per unit of time (the TOF image sensor 140 may use the time-delay information for each detector to determine the depth information for the respective pixel using at least a single photon, col. 6, lines 44 – 65).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nagaya in view of Stent and the teachings of Raynor such that the sensor used was a time-of-flight sensor as disclosed by Raynor, with motivation to achieve rapid and accurate detection (Raynor, col. 1, lines 35 – 42).
Nagaya in view of Stent and Raynor fails to disclose: for each capture and for each pixel, calculate a standard deviation value associated with the distance, and a standard deviation value associated with the signal value and a confidence value, and provide, to the neural network in association with each pixel for each capture, the distance, signal, and standard deviation values associated with the distance and with the signal, and the confidence value; and the neural network being configured to generate, based on the values provided by the ToF sensor from each capture, a respective estimate of a direction associated with the user for each capture.
In the same filed of endeavor, Rogan discloses optical system calibration (TITLE), wherein a standard deviation of detected angles was used to determine confidence value (aggregation may enable further evaluation of the detected angle 312; e.g., the standard deviation of the detected angle 312 may enable a measure of the confidence in the detected angle 312 of the lidar sensor 112, and the comparison with the predicted angle 310 may be performed when the standard deviation of the detected angle 312 is within an acceptable confidence range for a statistically significant sample size [0055]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nagaya in view of Stent and Raynor and the teachings of Rogan, such that, for each capture and for each pixel, the standard deviation associated with the distance value was calculated, and a standard deviation value associated with the signal value and a confidence value were provided to the neural network in order to generate an estimate of a direction associated with the user, with motivation to further correct the detection results (Rogan [0003]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Nagaya in view of Stent, Raynor, and Rogan, and further in view of Bresch et al. (US 2019/0156204).
As to claim 13 (dependent on 1), Nagaya in view of Stent, Raynor and Rogan disclose the system, wherein the neural network comprises: at least one convolutional layer (Stent, convolutional neural network 510 of fig. 5); but does not explicitly disclose that the neural network comprises: at least one dense layer.
In the same field of endeavor, Bresch discloses a neural network model (TITLE), wherein the neural network comprises at least one dense layer (dense layer 504 of fig. 5a).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nagaya in view of Stent, Raynor and Rogan and the teachings of Bresch, such that the neural network comprised at least one dense layer as disclosed by Bresch, with motivation to produce numerical values that can be more easily compared (Bresch [0068]).
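As a generic illustration of the convolutional-plus-dense arrangement discussed above (this sketch is not the network of Stent or Bresch; the layer sizes and class count are hypothetical):

```python
# Generic sketch (not Stent's or Bresch's network; hypothetical sizes): one
# convolutional layer followed by one dense (fully connected) layer mapping
# per-pixel ToF feature maps to a small set of direction classes.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(in_channels=5, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 6),  # dense layer -> 6 hypothetical direction classes
)

x = torch.randn(1, 5, 8, 8)    # one 8x8 capture with 5 values per pixel
print(model(x).shape)          # torch.Size([1, 6])
```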
Allowable Subject Matter
Claims 12 and 22 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
As to claim 12, the Prior Art of record alone or in combination fails to disclose the system according to claim 1, wherein
the confidence value is a Boolean value indicating whether the measurements performed by the pixel allow the user to be detected, the sensor being configured to not provide to the neural network the measurements acquired by a pixel that does not detect the user. (Emphasis Added.)
As to claim 22, the Prior Art of record alone or in combination fails to disclose the system of claim 21, wherein
the microcontroller is configured to:
detect presence of the user;
and when the user is detected, set the attention value at a maximum value when at least one of the following conditions is satisfied:
a same direction was estimated for at least two most recent consecutive captures of the plurality of consecutive captures;
or a particular direction was estimated for at least a threshold number of the plurality of consecutive captures;
or the distance measured by the ToF sensor is less than a reference distance value;
and gradually decrease the attention value from the maximum value when the user is not detected or when none of the conditions is satisfied. (Emphasis Added.)
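Purely as an illustration of the logic recited above, and not as a statement of how the claimed system is implemented, the attention-value update could be sketched as follows (all parameter values are hypothetical):

```python
# Purely illustrative of the recited logic (hypothetical parameter values);
# not an assertion about the actual implementation of the claimed system.
def update_attention(attention, user_detected, directions, distance,
                     threshold_count=5, reference_distance=0.6,
                     max_value=1.0, decay=0.1):
    """directions: most-recent-first list of direction estimates; distance in metres."""
    if user_detected:
        same_last_two = len(directions) >= 2 and directions[0] == directions[1]
        frequent_direction = any(directions.count(d) >= threshold_count
                                 for d in set(directions))
        if same_last_two or frequent_direction or distance < reference_distance:
            return max_value                  # any one condition sets the maximum
    return max(0.0, attention - decay)        # gradual decrease otherwise

print(update_attention(0.3, True, ["North", "North", "East"], 1.2))  # 1.0
print(update_attention(1.0, False, [], 2.0))                         # 0.9
```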
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DMITRIY BOLOTIN whose telephone number is (571)270-5873. The examiner can normally be reached M-F 9AM - 5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chanh Nguyen can be reached at (571)272-7772. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DMITRIY BOLOTIN/ Primary Examiner, Art Unit 2623