Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 9-11, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Malik (Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction) in view of Senegas (WO 2018069419 A1).
Regarding claims 1, 17,
Malik teaches a computing system, comprising: a plurality of sensors (Malik 1 Hardware SPAD Photon Sensors); an illuminator (Malik 3.1 Laser); at least one memory storing instructions; and at least one processor in communication with the at least one memory, wherein the at least one processor is configured to execute the stored instructions to (Malik 3.4 Implementation Details “We train the network on a single NVIDIA A40 GPU” 4.1 Dataset “To decrease the memory required for training, we crop and downsample the histograms” Note: Here Malik teaches that its computing system uses a processor, specifically a named graphics processor, and memory.): initiate emission of a light pulse by the illuminator, (Malik 3.1 Image Formation Model “Consider that a laser pulse illuminates a point in a scene that is imaged onto a sensor at position p ∈ R2 (see Fig. 2). Assume light from the laser pulse propagates to a surface and back to p along the same path described by a ray r(t), where t indicates propagation time.” Note: Here Malik teaches the presence of an illuminator, which in Malik is a pulsed laser capable of emitting a light pulse.) the illuminator has an associated illuminator profile including a plurality of learnable parameters; (Malik 3.2 Time-Resolved Volume Rendering “We denote by c the radiance of light scattered at a point r(t) in the direction ω, and σ represents the volume density or the differential probability of ray termination at r(t).” 3.3 Reconstruction “To reconstruct transient NeRFs, we use lidar measurements τ̃(k)[i, j, n] of a scene captured from 0 ≤ k ≤ K − 1 different viewpoints. We parameterize transient NeRF using a neural network F consisting of a hash grid of features and a multi-layer perceptron decoder [30]. The network takes as input a coordinate and viewing direction, and outputs radiance and density, F(r(t), ω) = c, σ. We use these outputs to render transients (see Fig. 2). The model is optimized to minimize the difference between the rendered transient and measured photon count histograms.” 4.4 Ablation Studies “we include quantitative results of our method without the space carving loss (w/o SC), without accounting for the laser profile” Note: Malik teaches that a parameter c denotes the radiance of light created by the illuminator and σ denotes volume density. Both parameters are output by a neural radiance field and are derived using information about the light pulses the illuminator emits. The neural radiance field creates these parameters to render its final output with higher accuracy. A learnable parameter is a parameter which a machine learning program learns in order to better optimize its prediction; this fits Malik’s radiance and volume density parameters exactly, as the model learns to generate them for use in optimizing its rendering. Malik here also teaches an “illuminator profile”: profiling in machine learning refers to examining datasets to understand their importance and characteristics. Since data on the light the illuminator outputs is measured and used to derive further data, in this case the radiance and density parameters, Malik teaches an “illuminator profile”. We know this is profiling because the last citation explicitly refers to it as the “laser profile”, and as the laser is Malik’s illuminator, this explicitly teaches an illuminator profile.) after a predetermined delay from emission of the light pulse, initiate capturing of a plurality of pixels in a scene using the plurality of sensors; (Malik 1.
Hardware Prototype “Photons detected by the SPAD are correlated with a sync signal from the laser using a time-correlated single photon counter (TCSPC) to measure the photon arrival timestamps.” 2 Related Work “including methods for imaging with single-photon sensors” 3.1 Image Formation Model “The term 2z/c gives the time delay for light to propagate to a point at distance z and back … we can describe the measured transient, or the number of photon detections captured by a SPAD … where N indicates the number of laser pulses per pixel” Note: Malik teaches that the illuminator (the laser) is synced with the SPAD (the sensor) and that a specific delay, described by a variable, is present. These individually timed sensor captures with illumination pulses produce pixels. It is also explicitly stated that there can be “sensors”, plural, teaching a plurality of sensors.) based on the plurality of captured pixels, for a point in the scene, compute a respective value for volumetric density, normal, reflectance and ambient light using a corresponding neural field; (Malik 3 Transient Neural Radiance Fields “We describe a mathematical model for transient measurements captured using single-photon lidar and propose a time-resolved volume rendering formulation compatible with neural radiance fields … Rendering transient neural radiance fields. We cast rays through a volume and retrieve the density and color at each point using a neural representation [30].” 2 Active single-photon imaging “In active imaging scenarios, pulsed light sources are paired with single-photon sensors to estimate the depth or reflectance of a scene by applying computational algorithms to the captured photon timestamps [19, 35, 36]. The extreme temporal resolution of these sensors also enables direct capture of interactions of light with a scene at picosecond timescales” 2 Algorithm 1: Extrinsics Calibration Data: “Fit planes and determine initial center and axis of rotation to align the surface normals.” Note: Malik teaches that from a volume we can obtain density, or volumetric density, using a neural field with data captured from the sensors, which includes pixels. Malik also teaches the computing of reflectance, light, and the interactions of light in a scene, which necessarily encompasses specific lighting categories like ambient light. Malik also teaches the computation of a normal, or surface normal, which is a vector perpendicular to a surface at a given point.) based upon the emitted light pulse, compute a shadow component corresponding to an origin of the illuminator and a direction of the emitted light pulse. While Malik does not explicitly teach the computation of a shadow component, it teaches a similar method of computing dark parts of surfaces based on the origin of the illuminator and the direction of the emitted light pulse. (Malik 3.3 Space carving regularization “We find that using the above loss function alone results in spurious patches of density in front of dark surfaces in a scene” 3.1 Image Formation Model “Assume light from the laser pulse propagates to a surface and back to p along the same path described by a ray r(t), where t indicates propagation time. The forward path along the ray is given as r(t) = x(p) + tc ω(p), where x(p) ∈ R3 is the ray origin, ω(p) ∈ S2 is the ray direction which maps to p, and c is the speed of light” Note: Malik teaches that dark spaces on a surface are computed, and it is specified that for each surface point a ray is mapped whose origin and direction are those of the illuminator, teaching the computing of a component corresponding to an origin of the illuminator and a direction of the emitted light pulse. While shadows could be encompassed in Malik’s “dark surfaces”, we do not assert that Malik teaches that a shadow component is computed); and using the computed respective value for volumetric density, normal, reflectance and ambient light, construct a gated image through a volume rendering formulation. (Malik 1. Hardware Prototype “Photons detected by the SPAD are correlated with a sync signal from the laser using a time-correlated single photon counter (TCSPC) to measure the photon arrival timestamps.” 3.1 Image Formation Model “The term 2z/c gives the time delay for light to propagate to a point at distance z and back … we can describe the measured transient, or the number of photon detections captured by a SPAD … where N indicates the number of laser pulses per pixel” 1 Introduction Figure 1: “Overview of transient neural radiance fields (Transient NeRFs). Measurements from a single-photon lidar are captured using a single-photon avalanche diode (SPAD), pulsed laser, scanning mirrors, and a time-correlated single photon counter (TCSPC). The lidar scans, consisting of a 2D array of photon count histograms (visualized with maximum-intensity projection), are captured from multiple viewpoints and used to optimize the transient NeRF. After training, we render novel views of time-resolved lidar measurements” Note: Gated imaging is a method of image capture in which a sensor is synced with a pulsed illumination source, often a laser. This is exactly what Malik teaches: the image capture sensor that detects photons (the SPAD) is time synced with a laser pulse that provides illumination, and this information is used by a neural field to create a rendering of the view. The list of values described by the claim (“volumetric density, normal, reflectance and ambient light”) is the same list from a previous portion of the claim. These values have already been shown to be calculated and considered by the neural field in Malik, and as the neural field exists to produce its rendering/gated image, these computed values are thereby used by the neural field to produce a gated image.)
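For reference, the ray model and round-trip delay quoted from Malik 3.1 throughout the mapping above can be transcribed in one place (this is a transcription of the quoted formulas, not additional disclosure):

$$r(t) = x(p) + t\,c\,\omega(p), \qquad x(p) \in \mathbb{R}^3,\ \omega(p) \in \mathbb{S}^2, \qquad t_{\mathrm{delay}} = \frac{2z}{c},$$

where c is the speed of light and z is the distance to the illuminated point; the round-trip delay 2z/c is what associates each photon timestamp with a depth along the ray.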
As mentioned previously, Malik does not explicitly teach computing a shadow component. The computation and detection of shadow components of objects used in image capture is taught in Senegas, which teaches: for a point in the scene, compute a shadow component; and using the computed respective value for the computed shadow component, construct a gated image (Senegas Col. 6 Line 11 “A video camera 40 is arranged to acquire video of at least a portion of the projected shadow S including edge E. … This processing is diagrammatically indicated in FIGURE 1 as a shadow edge detector 44, which can operate on a per-frame basis using any suitable edge-detection algorithm, e.g. by computing a gradient image and detecting a high gradient line corresponding to the shadow edge E; or detecting a transition line separating lower intensity shadow S and the higher intensity region illuminated by the unoccluded beam B; or so forth. Since the shadow edge E is generally a line (possibly curved), the position of the shadow edge in a given frame acquired at time t (as indicated by the frame time stamp t) can be represented as a scalar value y(t) representing the shadow edge versus time 46, … From this function 46, the respiration rate 50 can be readily derived as the peak-to-peak time interval of y(t). By analyzing the waveform shape of the function y(t) the overall respiratory cycle can be obtained … this provides a respiratory gating signal 52 … these data 52, 54 may optionally be used as inputs to the imaging device 8 to perform gated imaging” Note: Here Senegas teaches that shadows are present in the captured images and that specific “shadow components”, that is, individual shadows and the boundaries that define them, are detected and computed. Senegas also teaches that gated imaging, which produces gated images, is performed leveraging the respiratory gating signal 52, which is derived from the respiration rate 50, which is itself derived from the shadow-edge position function 46. This directly teaches the use of shadow components in constructing gated images.)
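Senegas’s chain from shadow to gated image can be summarized with one worked relation (interpreting the quoted “peak-to-peak time interval of y(t)” as the respiratory period; the explicit rate formula is an interpretive gloss, not quoted text):

$$T_{\mathrm{resp}} = \text{peak-to-peak interval of } y(t), \qquad \text{respiration rate} \approx \frac{1}{T_{\mathrm{resp}}},$$

i.e., the shadow-edge position y(t) oscillates with respiration, its period yields the respiration rate 50 and the respiratory gating signal 52, and that gating signal drives the gated imaging.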
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Malik with Senegas’s teachings such that, among the many other lighting and surface qualities computed/detected by Malik, a shadow component is also computed from the illuminator origin and light pulse direction, and the shadow component is used alongside other information such as normal, reflectance, and ambient lighting to construct a gated image.
There are several reasons that would motivate one to do so, one of which is to construct a more detailed gated image by considering the shadow components. As Malik already considers lighting, its interactions, and reflectance, it would be obvious to additionally include shadow components when making a gated image and to compute them alongside the other components from the light’s direction and origin.
Regarding claim 9,
Malik teaches:
A vehicle, comprising: a plurality of sensors; an illuminator; at least one memory storing instructions; and at least one processor in communication with the at least one memory, wherein the at least one processor is configured to execute the stored instructions to (Malik 6 Discussion “The proposed framework and the ability to render transient measurements from novel views may be especially relevant for realistic simulation for autonomous vehicle navigation, multiview remote sensing, and view synthesis of more general transient phenomena.” Note: The components that comprise the vehicle of claim 9 have already been shown to be taught by Malik with respect to claim 1. Here Malik details that its teachings are especially relevant for use in a vehicle, teaching a vehicle comprising all of Malik’s components and teachings.) As the remaining body text of claim 9 is identical to claims 1 and 17, it is rejected under the same rationale. The claims depending on claim 9 are likewise rejected alongside the dependent claims of claims 1 and 17, as they contain identical body text apart from referring to the vehicle of claim 9, which has been shown to be taught.
Regarding claims 2, 10, 18, dependent on 1, 9, 17,
Malik teaches:
The computing system of claim 1, wherein the volume rendering formulation comprises computing pixel intensity contribution of the point along a ray of the emitted light pulse based at least in part upon accumulated transmittance through the capturing and the respective value for the volumetric density. (Malik 3 Transient Neural Radiance Fields “We describe a mathematical model for transient measurements captured using single-photon lidar and propose a time-resolved volume rendering formulation compatible with neural radiance fields … Rendering transient neural radiance fields. We cast rays through a volume and retrieve the density and color at each point using a neural representation [30].” 3.1 Image Formation Model “Consider that a laser pulse illuminates a point in a scene that is imaged onto a sensor at position p ∈ R2 (see Fig. 2). Assume light from the laser pulse propagates to a surface and back to p along the same path described by a ray r(t), where t indicates propagation time … The term 2z/c gives the time delay for light to propagate to a point at distance z and back … we can describe the measured transient, or the number of photon detections captured by a SPAD … where N indicates the number of laser pulses per pixel … The resulting measurements τ̃[i, j, n] represent a noisy histogram of photon counts collected at pixel [i, j] at time bin n.” 2 Active single-photon imaging “SPADs or avalanche photodiodes capture photon count histograms or time-resolved intensity” Note: Here Malik teaches that, for each pixel at its specific location, the photon counts that make up that pixel over a specified period of time are stored in a histogram. This count of photons representing a collected pixel is intensity, as explicitly stated in the second citation. The neural field uses this photon data, stored in histograms associating intensity with pixels, to generate the final volume rendering, teaching that pixel intensity is used in a volume rendering formulation. Malik also teaches that its photons record points hit by the illuminator pulse, whose path is recorded as a ray, teaching that the pixels correspond to points along a ray of an emitted light pulse. The last portion of the claim, volumetric density being leveraged in volume rendering, is taught as well: Malik teaches that in its rendering process the volumetric density of points is found and used in making the final render, as also discussed for claim 1.)
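For context, the claim’s “accumulated transmittance” language tracks the standard volume rendering integral on which neural radiance field methods such as Malik’s time-resolved variant build. A minimal sketch of that standard formulation (the well-known NeRF rendering equation, not a reproduction of Malik’s exact time-resolved equations, which are not set out in this action):

$$C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,c(\mathbf{r}(t),\omega)\,dt, \qquad T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right).$$

Here the contribution of each point along the ray to the rendered pixel intensity is weighted by the accumulated transmittance T(t) and the volume density σ, matching the claim language mapped above.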
Regarding claims 3, 11, 19, dependent on 1, 9, 17,
Malik teaches the volume rendering formulation comprises computing pixel intensity contribution of the point along a ray of the emitted light pulse (Malik 3 Transient Neural Radiance Fields and 3.1 Image Formation Model, cited and explained in the previous claim) based at least in part upon a distance and a relative position of the point corresponding to the illuminator. (Malik 3.1 Image Formation Model “Consider that a laser pulse illuminates a point in a scene that is imaged onto a sensor at position p ∈ R2 (see Fig. 2). Assume light from the laser pulse propagates to a surface and back to p along the same path described by a ray r(t), where t indicates propagation time.” Introduction Figure 1: “Overview of transient neural radiance fields (Transient NeRFs). Measurements from a single-photon lidar are captured using a single-photon avalanche diode (SPAD), pulsed laser, scanning mirrors, and a time-correlated single photon counter (TCSPC). The lidar scans, consisting of a 2D array of photon count histograms (visualized with maximum-intensity projection), are captured from multiple viewpoints and used to optimize the transient NeRF. After training, we render novel views of time-resolved lidar measurements” Note: Malik teaches that the sensor data used to compose the volume rendering is recorded along a ray representing the path the illuminator/laser light traveled, showing that the distance and relative position of a point corresponding to the illuminator are core inputs to rendering and to computing pixel intensity, as both are derived from the sensor data.)
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Malik (Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction) in view of Senegas (WO 2018069419 A1) and further in view of Grauer (US 20150296200 A1).
Regarding claims 8, 16, dependent on 1, 9,
Malik teaches:
The computing system of claim 1,
While Malik details a gated imaging process with gated cameras/sensors, it does not teach the use of stereo gated cameras. This is taught in Grauer, which teaches:
wherein the plurality of sensors includes stereo gated cameras or stereo RGB cameras. (Grauer ¶21 “FIG. 1 and FIG. 2 illustrate a vehicle mounted stereo gated imaging and ranging system 60 which may include at least a single gated (pulsed) light source 10 in the non-visible spectrum (e.g. NIR by a LED and/or laser source) in order to illuminate, for example, the environment in front 50 of the vehicle 20. Furthermore, stereo gated imaging and ranging system may also include at least two cameras/sensors 40 whereas at least one camera/sensor is adapted for image gating. Stereo gated imaging cameras may be located internally in the vehicle”)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Malik with Grauer such that Malik’s gated imaging system employs stereo gated cameras.
There are several reasons that would motivate one to do so, one of which is to increase depth accuracy by leveraging stereo gated cameras. As Malik already measures depth, a more accurate measurement could be obtained through stereo gated cameras.
Claims 4, 5, 12, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Malik (Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction) in view of Senegas (WO 2018069419 A1) and further in view of Madison (US 20200025639 A1).
Regarding claims 4, 12, dependent on 1, 9,
Malik teaches the computing system of claim 1, wherein the volumetric density is normalized. (Malik 3 Transient Neural Radiance Fields “We describe a mathematical model for transient measurements captured using single-photon lidar and propose a time-resolved volume rendering formulation compatible with neural radiance fields … Rendering transient neural radiance fields. We cast rays through a volume and retrieve the density and color at each point using a neural representation [30].” Malik 3.2 Time-Resolved Volume Rendering “We denote by c the radiance of light scattered at a point r(t) in the direction ω, and σ represents the volume density or the differential probability of ray termination at r(t).” 4.3 Intermediate Rendering Results “Rendered transients, densities, and radiance plotted versus bin number for rays represented in the rendered image of the hotdog scene trained on three views. We normalize the densities and radiance values for visualization and plot the unnormalized transients.” Note: Here Malik teaches that the densities and radiance values are normalized. As stated, these values come from rays and, as the first and second citations show, are specifically volumetric densities, showing that Malik teaches the normalization of volumetric density.)
While Malik teaches the normalization of volumetric density, it does not teach leveraging depth loss for normalization.
Doing so is taught in Madison, which teaches normalization with a depth loss (Madison: “FIG. 2D shows a number of curves where the loss rates Γ_loss are normalized by dividing them by their extrapolated zero-trap-depth loss rates” Note: Here Madison teaches that normalization can be accomplished with a depth loss.)
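In symbols, the normalization-by-division pattern Madison describes can be sketched as follows (the notation, including writing the extrapolated zero-trap-depth loss rate as Γ_loss(0), is assumed for illustration and is not Madison’s own):

$$\Gamma_{\mathrm{norm}} = \frac{\Gamma_{\mathrm{loss}}}{\Gamma_{\mathrm{loss}}(0)},$$

i.e., each measured loss rate is divided by a reference loss rate, removing the depth-dependent scale from the data.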
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Malik with Madison such that volumetric density is normalized specifically by leveraging depth loss.
There are several reasons that would motivate one to do so. As depth loss describes the error between the measured and actual depth of points, acknowledging that data taken at a great distance or in poor conditions may have a higher depth loss, and normalizing accordingly, would yield more realistic, less error-prone data on quantities like volumetric density.
Regarding claims 5, 13, dependent on 1, 9,
Malik teaches:
The computing system of claim 1, wherein each of the normal, reflectance and ambient light is regularized or normalized. (Malik 3.3 Reconstruction “To reconstruct transient NeRFs, we use lidar measurements τ̃(k)[i, j, n] of a scene captured from 0 ≤ k ≤ K − 1 different viewpoints. … The network takes as input a coordinate and viewing direction, and outputs radiance and density, F(r(t), ω) = c, σ. We use these outputs to render transients (see Fig. 2). The model is optimized to minimize the difference between the rendered transient and measured photon count histograms. We also introduce a modified loss function to account for the high dynamic range of lidar measurements,” 2 Active single-photon imaging “In active imaging scenarios, pulsed light sources are paired with single-photon sensors to estimate the depth or reflectance of a scene by applying computational algorithms to the captured photon timestamps [19, 35, 36]. The extreme temporal resolution of these sensors also enables direct capture of interactions of light with a scene at picosecond timescales” 2 Intrinsics “We convert the lidar scans to a point cloud using the raxel model and lidar time of flight, and then a coarse solution is obtained by fitting planes to the point clouds and finding the center and axis of rotation that align the plane normals.” 4 Captured dataset “We bin the photon counts into histograms with 1500 bins … Prior to input into the network for training, we normalize the measurement values by the maximum photon count observed across all view[s]” Note: Malik explicitly states that reflectance, the total light of a scene and its interactions (which encompasses ambient light), and normals are measurements obtained from its photon sensors. The last citation then explicitly states that the measurement values the sensors obtain are normalized.)
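The normalization Malik describes, dividing by the maximum photon count observed across all views, can be written out as follows (the subscripted max notation is assumed for illustration; only the division by the global maximum is quoted from Malik):

$$\tilde{\tau}_{\mathrm{norm}}^{(k)}[i,j,n] = \frac{\tilde{\tau}^{(k)}[i,j,n]}{\max_{k,i,j,n}\ \tilde{\tau}^{(k)}[i,j,n]},$$

so every measurement is rescaled into a common range before training.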
While Malik mentions a loss function being applied to its sensor data, and the normalization of sensor data that includes ambient light, reflectance, and normals, it does not explicitly state that the loss function is used for the purpose of normalization/regularization.
Using a loss function for the purpose of normalization is taught in Madison, which teaches normalization with a respective loss component. (Madison ¶33 “since shot-to-shot pressure variations are normalized out by dividing Γ_loss by P_b.” Note: Here Madison teaches that normalization can be accomplished with a respective loss component, in this case by using a loss parameter in a division.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Malik with Madison such that the normal, reflectance, and ambient light are normalized with a respective loss component.
There are several reasons that would motivate one to do so, one being to achieve more accurate and usable data that is less skewed by errors. Normalization in machine learning already seeks to remove issues present in data sets, such as sensitivity and overfitting; the idea of further improving the normalization of Malik’s normal, reflectance, and ambient light by accounting for known inaccuracies through the loss would have been obvious.
Claims 6, 7, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Malik (Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction) in view of Senegas (WO 2018069419 A1) and further in view of Streeter (US 20110144723 A1).
Regarding claims 6, 14, dependent on 1, 9,
Malik teaches:
The computing system of claim 1,
Streeter teaches:
wherein the illuminator includes a plurality of vertical-cavity surface-emitting laser (VCSEL) modules for illuminating the scene (Streeter ¶124 “In certain embodiments, the light source 40 comprises one or more laser diodes, which each provide coherent light.” ¶129 “In certain embodiments, … the light source 40 includes at least one vertical cavity surface-emitting laser (VCSEL) diode.” Note: Streeter teaches the use of a VCSEL as a light source, or illuminator, and that multiple lasers can be employed for illumination.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Malik with Streeter such that the laser used for illumination taught by Malik is a VCSEL as described in Streeter.
There are several reasons that would motivate one to use a VCSEL specifically for illumination in Malik’s context, one of which is the energy efficiency and high beam quality offered by VCSELs, qualities relevant to Malik, which utilizes high-precision, high-speed sensors.
Regarding claims 7, 15, dependent on 6, 14,
Malik teaches:
The computing system of claim 6,
Streeter teaches:
wherein the light pulse is a laser pulse with a duration of 240-370 nanoseconds and a wavelength of 808 nm. (Streeter ¶129 “In another embodiment, the light source 40 comprises a laser source having a wavelength of about 808 nanometers. In still other embodiments, the light source 40 includes at least one vertical cavity surface-emitting laser (VCSEL) diode.” ¶242 “If the light is pulsed, the pulses range, in some embodiments from at least about 10 nanoseconds long to about 50 milliseconds long, including about 10-100 ns, 100-500 ns,” Note: Streeter teaches that the pulsed illuminator, which can be a laser with a wavelength of 808 nm, can be pulsed in a range of about 100-500 nanoseconds. As the claimed range of 240-370 ns falls entirely within this example range, with neither bound going above or below the range Streeter mentions, Streeter teaches a laser pulse of 808 nm that may have a duration of 240-370 nanoseconds.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Malik with Streeter such that the pulsed laser used for illumination in Malik’s system has a wavelength of 808 nm and a pulse length of 240-370 nanoseconds.
There are several reasons that would motivate one to do so. As the lasers discussed in Malik are of high precision and are used with precise sensors, one could achieve reliable results by choosing wavelengths and pulse lengths that either have directly been shown to be used before or fall closely within ranges shown to be used before. Additionally, at such levels of precision there is an inherently limited number of choices available, making it virtually impossible to have a novel wavelength or pulse length at nanosecond-level precision.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Malik (Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction) in view of Senegas (WO 2018069419 A1), further in view of Grauer (US 20150296200 A1), further in view of Streeter (US 20110144723 A1), and further in view of Madison (US 20200025639 A1).
As claim 20 introduces no new content and is simply a direct listing of the content of claims 4, 5, 6, and 7, it is rejected under the same rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN GREGORY HAKALA whose telephone number is (571)272-7863. The examiner can normally be reached 8:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALAN GREGORY HAKALA/Examiner, Art Unit 2617
/KING Y POON/Supervisory Patent Examiner, Art Unit 2617