DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/21/2025 has been entered.
Response to Amendment
This is in response to applicant’s amendment/response filed on 11/21/2025, which has been entered and made of record. Claims 1, 2, 4, 6, 12, 16-19 are amended. Claims 1-20 are pending in the application.
Response to Arguments
Applicant's arguments filed 11/21/2025 have been fully considered but they are not persuasive. Applicant primarily argues that Srinivasan merely discusses indirect lighting generally during simulated training of a model, and concludes that computing indirect illumination components through integrals cannot be relied upon to teach or suggest, “estimating, using indirect light estimation model, indirect light information from the global geometry feature.” Applicant further argues that even if indirect illumination is taught therein, that illumination does not pertain to a global geometry feature used to estimate indirect light information (see applicant’s arguments of 11/21/2025, pages 8-9).
Examiner does not agree with Applicant’s arguments and the conclusions drawn therefrom. Srinivasan’s rendering algorithm is depicted in fig. 3, where both direct (c) and indirect (d) illumination are used in an integral to find the pixels in the rendered image. Fig. 4 also shows the geometry of an indirect illumination path from camera to light source, and equations 12-14 show volume rendering equations that use both direct and indirect illumination components [see section 3.3]. In section 3.4, Rendering, Srinivasan further discloses that “We shade each point along the ray with indirect illumination by estimating the integral in Equation 13.” Srinivasan also clearly uses indirect lighting under global illumination, since he discloses, “In this work, we present a method to train a NeRF-like model that can simulate realistic environment lighting and global illumination.” [see page 7496, Col. 1, last para]. Furthermore, the integrals in eqns. 12-14 are taken over a sphere S around each point, indicating lighting contributions from all around the point, i.e., taking global illumination into account [see section 3.4, rendering, and section 3.2, page 7498, Col. 2, ¶001].
Finally, Applicant argues that since Dave excludes the use of indirect illumination within the “PANDORA Pipeline,” the proposed combination of Dave and Srinivasan does not teach the limitation of, “estimating, using indirect light estimation model, indirect light information from the global geometry feature.”
Examiner does not agree with Applicant’s arguments and the conclusions drawn therefrom. In the Assumptions section, Dave discloses “We focus on direct illumination light paths, and indirect illumination and self-occlusions are currently neglected” – which should not be understood to mean that indirect illumination can never be used merely because it was neglected by Dave. Here, neglecting means that Dave did not account for indirect illumination in scene rendering, possibly to simplify the process or simulation. Such an assumption should not be used to conclude that indirect illumination cannot be used at all. Thus, Examiner contends that the combination of Dave and Srinivasan is proper. For details, see the rejection below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-11, 14, 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Dave et al. (Dave, Akshat, Yongyi Zhao, and Ashok Veeraraghavan. "Pandora: Polarization-aided neural decomposition of radiance." European conference on computer vision. Cham: Springer Nature Switzerland, 2022; hereinafter Dave) in view of Srinivasan et al. (Srinivasan, Pratul P., et al. "Nerv: Neural reflectance and visibility fields for relighting and view synthesis." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.; hereinafter Srinivasan).
Regarding claim 1, Dave discloses an electronic device (Pandora solution in Computer Graphics system, abstract) comprising:
one or more processors (a processor is inherent in a computer graphics system, abstract) configured to:
extract, using an implicit neural representation (INR) model, a global geometry feature and information indicating whether a point is on a surface of an object from a viewpoint and a view direction corresponding to an image pixel corresponding to a two-dimensional (2D) scene at the viewpoint within a field of view (FOV) (abstract: “We propose PANDORA, a polarimetric inverse rendering approach based on implicit neural representations"; Sec. 4.2: "For rendering surfaces, the ideal opacity should have a sharp discontinuity at the ray surface intersection [ ... ] SDFNet, that takes as input the position x and outputs the signed distance field at that position x along with geometry feature vectors". The skilled person understands that the signed distance field is an implicit representation of a geometry and that evaluating the ray intersection at the (pixel) position x already implies that a viewpoint and a viewing direction are utilized. “Thus for any pixel in the captured images, the camera position o and camera ray direction d are known.” – Section 4.1, input, including 4 MP image sensor);
determine an object surface position corresponding to the viewpoint and the view direction and normal information of the object surface position based on the information indicating whether the point is on the surface (ibid: "…takes as input the position x and outputs the signed distance field at that position x [ ... ] The SDF model also provides surface normal”);
estimate, using an albedo estimation model, albedo information independent of the view direction from the global geometry feature, the object surface position, and the normal information (Sec. 3.4: "Diffuse radiance is invariant of the viewing direction and only depends on the spatial location. The geometry features from SDFNet and the position are passed through another coordinate based MLP, denoted as Diffuse Net, to output the diffuse radiance". Diffuse radiance is trivially synonymous with the claimed albedo information; also see Applicant’s definition of Albedo in ¶0049, ¶0079, ¶0097 and ¶0101 of the present application [refer to PGPUB # US 20240177408 A1]);
estimate, using a specular estimation model, specular information dependent on the view direction from the global geometry feature, the object surface position, the normal information, and the view direction (ibid.: "Unlike the diffuse radiance, the specular radiance depends on the viewing angle d and the object roughness [ ... ] we instead use an IDE-based neural network to output the specular radiance, LO from the estimated roughness, a and surface normal, n”); and determine a pixel value of the image pixel based on scene component information (see, e.g., “For example, we assign pink albedo to the object by removing the G component of radiance without altering the specularities in (Fig. 1(c) top left).”, section 5.3 Additional Applications; also see fig. 2, left, for the Diffuse Albedo component used in camera pixel value calculation).
Although likely implicit for rendering an image (abstract), Dave is not found to expressly disclose the limitation of, estimate, using indirect light estimation model, indirect light information from the global geometry feature, and determine a pixel value of the image pixel based on scene component information including visibility information together with the albedo information and the specular information, the visibility information including the indirect light information.
However, Srinivasan discloses a method that takes as input a set of images of a scene illuminated by unconstrained known lighting, and produces as output a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions (see abstract). Srinivasan further discloses that the rendered image takes into consideration light visibility (b), direct and indirect illumination (c, d), BRDF (e), normals (f), albedo (g), specular roughness (h), and shadow map (i); Srinivasan also shows the same rendered viewpoint if it were lit by only (j) direct and (k) indirect illumination (see fig. 3(a), section 3.1, eqns 1-15, section 3.4 Rendering).
Srinivasan’s rendering algorithm is depicted in fig. 3, where both direct (c) and indirect (d) illumination are used in an integral to find the pixels in the rendered image. Fig. 4 also shows the geometry of an indirect illumination path from camera to light source, and equations 12-14 show volume rendering equations that use both direct and indirect illumination components (see section 3.3). In section 3.4, Rendering, Srinivasan further discloses that “We shade each point along the ray with indirect illumination by estimating the integral in Equation 13.” Srinivasan also clearly uses indirect lighting under global illumination, since he discloses, “In this work, we present a method to train a NeRF-like model that can simulate realistic environment lighting and global illumination.” (see page 7496, Col. 1, last para). Furthermore, the integrals in eqns. 12-14 are taken over a sphere S around each point, indicating lighting contributions from all around the point, i.e., taking global illumination into account (see section 3.4, rendering, and section 3.2, page 7498, Col. 2, ¶001). As to the limitation, “the visibility information including the indirect light information,” this is merely a grouping convention or definition that does not carry patentable weight.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Dave with the teaching of Srinivasan to render the output as 2D image pixels, determining the pixel value of the image pixel based on scene component information including visibility information together with the albedo information and the specular information, while taking both direct and indirect illumination into consideration, to obtain: estimate, using indirect light estimation model, indirect light information from the global geometry feature, and determine a pixel value of the image pixel based on scene component information including visibility information together with the albedo information and the specular information, the visibility information including the indirect light information, because combining prior art elements ready to be improved according to known methods to yield predictable results is obvious. Furthermore, including Srinivasan’s method in the image rendering of Dave enhances the scope of usage of the model by enabling rendering under both direct and indirect lighting conditions, which was initially neglected in Dave’s method (see Assumptions on page 540 of Dave).
Regarding claim 2, Dave in view of Srinivasan discloses the electronic device of claim 1, wherein the one or more processors are configured to determine the pixel value of the image pixel further based on direct light information together with the albedo information and the specular information (Dave: Our key insight is that polarization is a useful cue for neural inverse rendering as polarization strongly depends on surface normals and is distinct for diffuse and specular reflectance … PANDORA jointly extracts the object’s 3D geometry, separates the outgoing radiance into diffuse and specular and estimates the incident illumination, abstract. Also see scene opacity in section 4.2, ¶1. Srinivasan, section 3.1, also discusses pixel rendering.
Srinivasan: The rendered image takes into consideration light visibility (b), direct and indirect illumination (c, d), BRDF (e), normals (f), albedo (g), specular roughness (h), and shadow map (i); Srinivasan also shows the same rendered viewpoint if it were lit by only (j) direct and (k) indirect illumination, see fig. 3(a), section 3.1, eqns 1-15, section 3.4 Rendering).
Regarding claim 3, Dave in view of Srinivasan discloses the electronic device of claim 2, wherein the one or more processors are configured to estimate, using a machine learning model, the scene component information from the global geometry feature, the object surface position, and the normal information, for ray directions departing from the object surface position (PANDORA: Polarization-Aided Neural Decomposition of Radiance, abstract. Our key insight is that polarization is a useful cue for neural inverse rendering as polarization strongly depends on surface normals and is distinct for diffuse and specular reflectance …. PANDORA jointly extracts the object’s 3D geometry, separates the outgoing radiance into diffuse and specular and estimates the incident illumination. We show that PANDORA outperforms state-of-the-art radiance decomposition techniques. PANDORA outputs clean surface reconstructions free from texture artefacts, models strong specularities accurately and estimates illumination under practical unstructured scenarios, abstract,
Srinivasan: The rendered image takes into consideration light visibility (b), direct and indirect illumination (c, d), BRDF (e), normals (f), albedo (g), specular roughness (h), and shadow map (i); Srinivasan also shows the same rendered viewpoint if it were lit by only (j) direct and (k) indirect illumination, see fig. 3(a), section 3.1, eqns 1-15, section 3.4 Rendering).
Regarding claim 4, Dave in view of Srinivasan discloses the electronic device of claim 2, wherein, for the determining of the pixel value, the one or more processors are configured to individually estimate the visibility information using a visibility estimation model (section 4.2, opacity in last ¶. Srinivasan, section 3.1 also discusses pixel rendering), and the direct light information using a direct light estimation model from the global geometry feature (We focus on direct illumination light paths – Assumptions before section 2. The specular component can be modelled as a direct reflection from specular microfacets on the surface, section 3.1 Polarimetric BRDF (pBRDF) Model), the object surface position and the normal information (Surface normal, fig. 2), for ray directions departing from the object surface position (Diffuse albedo, fig. 2).
Regarding claim 5, Dave in view of Srinivasan discloses the electronic device of claim 1, wherein the one or more processors are configured to:
estimate scene component information including albedo information and specular information for view directions corresponding to image pixels corresponding to the 2D scene from the viewpoint ("Unlike the diffuse radiance, the specular radiance depends on the viewing angle d and the object roughness [ ... ] we instead use an IDE-based neural network to output the specular radiance, LO from the estimated roughness, a and surface normal, n”, section 4.3 Specular Radiance Estimation. Srinivasan, section 3.1 also discusses pixel rendering
Srinivasan: The rendered image takes into consideration light visibility (b), direct and indirect illumination (c, d), BRDF (e), normals (f), albedo (g), specular roughness (h), and shadow map (i); Srinivasan also shows the same rendered viewpoint if it were lit by only (j) direct and (k) indirect illumination, see fig. 3(a), section 3.1, eqns 1-15, section 3.4 Rendering); and
generate a 2D image by determining pixel values of the image pixels using scene component information estimated for the image pixels (We propose a framework to render polarization images from implicit representations of the object geometry, surface reflectance and illumination – Contributions, page 540. Srinivasan, section 3.1 also discusses pixel rendering).
Regarding claim 6, Dave in view of Srinivasan discloses the electronic device of claim 1, wherein the one or more processors are configured to:
adjust any one or any combination of any two or more of scene component of visibility information, the indirect light information, direct light information, the albedo information, and the specular information, based on a user input (Our key insight is that polarization is a useful cue for neural inverse rendering as polarization strongly depends on surface normals and is distinct for diffuse and specular reflectance … PANDORA jointly extracts the object’s 3D geometry, separates the outgoing radiance into diffuse and specular and estimates the incident illumination, abstract. Also see scene opacity in section 4.2, ¶1. We propose a differentiable rendering framework that takes as input the surface, reflectance parameters and illumination and renders polarization images under novel views, Section 1, Our Approach.
Srinivasan: The rendered image takes into consideration light visibility (b), direct and indirect illumination (c, d), BRDF (e), normals (f), albedo (g), specular roughness (h), and shadow map (i); Srinivasan also shows the same rendered viewpoint if it were lit by only (j) direct and (k) indirect illumination, see fig. 3(a), section 3.1, eqns 1-15, section 3.4 Rendering); and
determine a pixel value of a pixel of the 2D image corresponding to the viewpoint and the view direction based on the adjusted scene component and an estimated scene component (PANDORA: Polarization-Aided Neural Decomposition of Radiance, abstract. Our key insight is that polarization is a useful cue for neural inverse rendering as polarization strongly depends on surface normals and is distinct for diffuse and specular reflectance …. PANDORA jointly extracts the object’s 3D geometry, separates the outgoing radiance into diffuse and specular and estimates the incident illumination. We show that PANDORA outperforms state-of-the-art radiance decomposition techniques. PANDORA outputs clean surface reconstructions free from texture artefacts, models strong specularities accurately and estimates illumination under practical unstructured scenarios, abstract. Srinivasan, section 3.1 also discusses pixel rendering
Srinivasan: The rendered image takes into consideration light visibility (b), direct and indirect illumination (c, d), BRDF (e), normals (f), albedo (g), specular roughness (h), and shadow map (i); Srinivasan also shows the same rendered viewpoint if it were lit by only (j) direct and (k) indirect illumination, see fig. 3(a), section 3.1, eqns 1-15, section 3.4 Rendering).
Regarding claim 7, Dave in view of Srinivasan discloses the electronic device of claim 1, wherein the one or more processors are configured to obtain the object surface position by repeatedly performing a ray marching based on the information indicating whether the point is on the surface, in the view direction from the viewpoint (Thus for any pixel in the captured images, the camera position o and camera ray direction d are known, section 4.1. Also see section 4.2 - Implicit Surface Estimation – Similar to VolSDF, our pipeline comprises of an MLP, which we term SDFNet, that takes as input the position x and outputs the signed distance field at that position x along with geometry feature vectors f useful for radiance estimation.
Srinivasan: The rendered image takes into consideration light visibility (b), direct and indirect illumination (c, d), BRDF (e), normals (f), albedo (g), specular roughness (h), and shadow map (i); Srinivasan also shows the same rendered viewpoint if it were lit by only (j) direct and (k) indirect illumination, see fig. 3(a), section 3.1, eqns 1-15, section 3.4 Rendering).
Regarding claim 8, Dave in view of Srinivasan discloses the electronic device of claim 1, wherein the one or more processors are configured to:
determine a point spaced apart from the viewpoint in the view direction (section 3.1, Stokes Vector); and
generate, using the INR model, a global geometry feature corresponding to the determined point and distance information on a distance between the determined point and an object surface (see section 4.2 - Implicit Surface Estimation. Srinivasan: The rendered image takes into consideration light visibility (b), direct and indirect illumination (c, d), BRDF (e), normals (f), albedo (g), specular roughness (h), and shadow map (i); Srinivasan also shows the same rendered viewpoint if it were lit by only (j) direct and (k) indirect illumination, see fig. 3(a), section 3.1, eqns 1-15, section 3.4 Rendering).
Regarding claim 9, Dave in view of Srinivasan discloses the electronic device of claim 8, wherein the one or more processors are configured to determine normal information of the determined point by analyzing the viewpoint, the view direction, and information indicating whether the determined point is on a surface (see abstract, fig. 3, section 4.2, etc. Fig. 4: "PANDORA outputs sharp specularities, cleaner surface and more accurate illumination."
Srinivasan: The rendered image takes into consideration light visibility (b), direct and indirect illumination (c, d), BRDF (e), normals (f), albedo (g), specular roughness (h), and shadow map (i); Srinivasan also shows the same rendered viewpoint if it were lit by only (j) direct and (k) indirect illumination, see fig. 3(a), section 3.1, eqns 1-15, section 3.4 Rendering).
Regarding claim 10, Dave in view of Srinivasan discloses the electronic device of claim 1, wherein
the INR model is trained based on an output of a neural renderer (title, abstract.
Srinivasan: The rendered image takes into consideration light visibility (b), direct and indirect illumination (c, d), BRDF (e), normals (f), albedo (g), specular roughness (h), and shadow map (i); Srinivasan also shows the same rendered viewpoint if it were lit by only (j) direct and (k) indirect illumination, see fig. 3(a), section 3.1, eqns 1-15, section 3.4 Rendering), and
the neural renderer is configured to estimate a pixel value of an image pixel from the global geometry feature, the object surface position, the normal information, and the view direction (see abstract,
Srinivasan: The rendered image takes into consideration light visibility (b), direct and indirect illumination (c, d), BRDF (e), normals (f), albedo (g), specular roughness (h), and shadow map (i); Srinivasan also shows the same rendered viewpoint if it were lit by only (j) direct and (k) indirect illumination, see fig. 3(a), section 3.1, eqns 1-15, section 3.4 Rendering).
Regarding claim 11, Dave in view of Srinivasan discloses the electronic device of claim 1, wherein the one or more processors are configured to estimate visibility information using a visibility estimation model trained using visible distances between the object surface position and arrival points determined based on ray marching for ray directions departing from the object surface position (eqn. 5, T(t) is the probability that the ray travels to t without getting occluded. See section 4.2 Implicit Surface Estimation).
Regarding claim 14, Dave in view of Srinivasan discloses the electronic device of claim 1, wherein the albedo estimation model and the specular estimation model are trained based on an objective function value between a ground truth (GT) 2D image and a temporary 2D image reconstructed based on albedo information output from the albedo estimation model, specular information output from the specular estimation model, and other scene component information output from a machine learning model (Dave: Fig. 4, Section 5.1.2, GT image used. Also see section 5.2, 3d Reconstruction; B Implementation details ¶1-2. Page 21, Effect of roughness on illumination estimation, fig. 15
Srinivasan: The rendered image takes into consideration light visibility (b), direct and indirect illumination (c, d), BRDF (e), normals (f), albedo (g), specular roughness (h), and shadow map (i); Srinivasan also shows the same rendered viewpoint if it were lit by only (j) direct and (k) indirect illumination, see fig. 3(a), section 3.1, eqns 1-15, section 3.4 Rendering).
Regarding method claims 16-19, although the wording is different, the material is considered substantively equivalent to device claims 1-3 and 6, respectively, as described above.
Claims 12, 13 are rejected under 35 U.S.C. 103 as being unpatentable over Dave in view of Srinivasan and further in view of Zhang et al. (Zhang, Yuanqing, et al. "Modeling indirect illumination for inverse rendering." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. Hereinafter Zhang).
Regarding claim 12, Dave in view of Srinivasan discloses the electronic device of claim 1.
Dave ignores indirect illumination estimation (see Assumptions). Therefore, Dave in view of Srinivasan is not found to expressly disclose the limitation of, wherein the indirect light estimation model is trained using color information of arrival points of rays departing from the object surface position viewed from the object surface position.
However, Zhang discloses modeling indirect illumination for inverse rendering using a neural representation to estimate RGB color information of arrival points of rays departing from the object surface position viewed from the object surface position (abstract, sections 3.3, 3.4, 3.6. Specifically, we first learn the geometry and outgoing radiance field of the object, both represented as MLPs, from the input images using the existing method [27]. Then, the learned radiance field serves as the ground-truth incoming illumination of its reachable surface points to train the indirect illumination MLP. Finally, the learned indirect illumination is plugged into the rendering equation and fixed during the optimization of SVBRDF and environmental light. In this way, the indirect illumination can be directly queried when optimizing the other unknowns without the need of recursive path tracing, making the inverse rendering problem better constrained and more efficient to solve. Furthermore, to reduce the ambiguity of disentangling BRDF and incident light, we introduce a prior that a real-world object should consist of limited types of materials. This prior is imposed by representing S, Abstract).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Dave in view of Srinivasan to include Zhang's indirect illumination in surface rendering, to obtain, wherein the one or more processors are configured to estimate indirect light information using an indirect light estimation model trained using color information of arrival points of rays departing from the object surface position viewed from the object surface position, because combining prior art elements ready to be improved according to known methods to yield predictable results is obvious. Furthermore, such a combination would enhance the versatility of the overall system.
Regarding claim 13, Dave in view of Srinivasan discloses the electronic device of claim 12, wherein the color information of the arrival points for training of the indirect light estimation model is estimated using the INR model and a neural renderer which are completely trained (Zhang, abstract. Sections 3.3, 3.6).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Dave in view of Srinivasan and further in view of Muller et al. (US 20220284658 A1, hereinafter Muller).
Regarding claim 20, Dave in view of Srinivasan discloses the method of claim 16 (see the claim 16 rejection above). Dave in view of Srinivasan is not found to disclose a non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, configure the one or more processors to perform the method.
However, Muller discloses that a non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, configure the one or more processors to perform a method (¶0152-0154, claim 24 and dependents).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Dave in view of Srinivasan to implement the method via a non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, configure the one or more processors to perform the method, because combining prior art elements ready to be improved according to known methods to yield predictable results is obvious.
Allowable Subject Matter
Claim 15 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 15, the prior art of record, taken alone or in combination, fails to reasonably disclose or suggest,
the electronic device of claim 14, wherein the temporary 2D image is reconstructed using an approximation that is based on a split of a rendering operation that determines an image pixel value based on scene component information into a reflection component and an illumination component.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NURUN FLORA whose telephone number is (571)272-5742. The examiner can normally be reached M-F 9:30 am -5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NURUN FLORA/Primary Examiner, Art Unit 2619