DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “control unit” in claims 1-11.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Objections
Claims 1-12 are objected to because of the following informalities:
For claim 1, Examiner believes this claim should be amended in the following manner:
A three-dimensional scanner system for reconstructing a three-dimensional shape of a moving object, the three-dimensional scanner system comprising:
a plurality of imaging devices each configured to be focused on the moving object and each comprising a plurality of imaging pixels each of which being capable to detect a light intensity on [[the]] that imaging pixel, and to detect as an event a positive or negative change of the light intensity that is larger than a respective predetermined threshold; and
a control unit configured to control the plurality of imaging devices and to reconstruct a time series of the three-dimensional shape of the moving object based on [[the]] events detected by the imaging devices and on additional information about colors, shape and/or movements of the moving object.
For claim 2, Examiner believes this claim should be amended in the following manner:
The three-dimensional scanner system according to claim 1, wherein
the control unit is configured to generate for each imaging device a two-dimensional image showing the events detected by [[the]] a respective imaging device during a predetermined time period, to extract key features from the two-dimensional images captured by all imaging devices during the predetermined time period, and to perform feature matching between the extracted key features, in order to reconstruct the three-dimensional shape the moving object had during the predetermined time period; and
the control unit is configured to use the additional information to support the process of feature matching.
For claim 3, Examiner believes this claim should be amended in the following manner:
The three-dimensional scanner system according to claim 1, further comprising
the plurality of imaging devices that are capable to detect light intensities of visible light in different color channels in order to generate the additional information.
For claim 4, Examiner believes this claim should be amended in the following manner:
The three-dimensional scanner system according to claim 3, wherein
the additional information is used to add color information on the three-dimensional shape reconstructed from the detected events.
For claim 5, Examiner believes this claim should be amended in the following manner:
The three-dimensional scanner system according to claim 3, wherein
the additional information comprises a time series of color frame images of the moving object; and
the control unit is configured to improve the time series of the color frame images by removing blur from the color frame images or by interpolating the color frame images based on the detected events, and to reconstruct the time series of the three-dimensional shape of the moving object from the improved time series of the color frame images.
For claim 6, Examiner believes this claim should be amended in the following manner:
The three-dimensional scanner system according to claim 1, wherein
the plurality of imaging pixels are capable to group the detected events according to characteristics of [[the]] received light; and
the additional information comprises information on [[the]] spatial and/or temporal distribution of events generated by light of a given characteristic.
For claim 7, Examiner believes this claim should be amended in the following manner:
The three-dimensional scanner system according to claim 6, wherein
the [[same]] plurality of imaging pixels are capable to detect events and to detect light intensities of different color channels; and
the given characteristic is [[the]] a color of the received light.
For claim 8, Examiner believes this claim should be amended in the following manner:
The three-dimensional scanner system according to claim 6, wherein
the plurality of imaging pixels are capable to group the detected events according to [[the]] polarization of the received light; and
the given characteristic[[s]] is the polarization of the received light.
For claim 9, Examiner believes this claim should be amended in the following manner:
The three-dimensional scanner system according to claim 1, wherein
the control unit is configured to deduce for each imaging device from changes in [[the]] a distribution of detected events over time directions of movements of the moving object; and
the additional information comprises the directions of the movements of the moving object as deduced for at least a part of the plurality of imaging devices.
For claim 10, Examiner believes this claim should be amended in the following manner:
The three-dimensional scanner system according to claim 1, wherein
the plurality of imaging devices are capable to capture an optic flow; and the additional information comprises direction, speed, and acceleration of the optic flow.
For claim 11, Examiner believes this claim should be amended in the following manner:
The three-dimensional scanner system according to claim 1, wherein
the additional information comprises a previously generated high resolution model of the three-dimensional shape of the moving object at rest; and
the control unit is configured to reconstruct the three-dimensional shape of the moving object from the detected events with a spatial resolution that is lower than the spatial resolution of the high resolution model, and to fit the high resolution model to the three-dimensional shape[[s]] reconstructed from the detected events in order to increase the spatial resolution of the[[se]] three-dimensional shape[[s]].
For claim 12, Examiner believes this claim should be amended in the following manner:
A method for operating a three-dimensional scanner system for reconstructing a three-dimensional shape of a moving object, the three-dimensional scanner system comprising a plurality of imaging devices each comprising a plurality of imaging pixels each of which being capable to detect a light intensity on [[the]] that imaging pixel, and to detect as an event a positive or negative change of the light intensity that is larger than a respective predetermined threshold, the method comprising:
detecting events with the plurality of imaging devices, while focusing the moving object with the imaging devices; and
reconstructing a time series of the three-dimensional shape of the moving object based on the events detected by the plurality of imaging devices and on additional information about colors, shape and/or movements of the moving object.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Belbachir et al., Event-driven Stereo Vision for Fall Detection, Computer Vision and Pattern Recognition Workshops, 2011 IEEE Computer Society Conference, June 2011, pages 78-83 (hereinafter “Belbachir”) (made of record in the IDS submitted 5/23/2024) in view of Kaufmann et al. (U.S. Patent Application Publication 2020/0273180 A1, hereinafter “Kaufmann”).
For claim 1, Belbachir discloses a three-dimensional scanner system for reconstructing a three-dimensional shape of a moving object (disclosing a system for capturing images to reconstruct a 3D shape of a moving object such as a falling person (pages 78-80/Fig. 4)), the scanner system comprising:
a plurality of imaging devices each configured to be focused on the object and each comprising a plurality of imaging pixels each of which being capable to detect a light intensity on the imaging pixel, and to detect as an event a positive or negative change of the light intensity (disclosing multiple dynamic vision sensors (DVS) as imaging devices configured to be focused on the person where each DVS comprises an array of imaging pixels where each pixel is capable to detect a light intensity for that pixel and to detect events as an increase (positive) or decrease (negative) change of the light intensity (page 79/Fig. 1)); and
a control unit configured to control the plurality of imaging devices and to reconstruct a time series of the three-dimensional shape of the object based on the events detected by the imaging devices and on additional information about colors, shape and/or movements of the object (disclosing a digital signal processor as a control unit to control the DVSs to reconstruct a time series of the 3D shape of the object based on events detected by the DVSs and on additional information of motion as movements of the object (pages 79-82/Figs. 1, 4, 5 and 8)).
Belbachir does not disclose detecting as an event a positive or negative change of the light intensity that is larger than a respective predetermined threshold.
However, these limitations are well-known in the art as disclosed in Kaufmann.
Kaufmann similarly discloses a system and method for generating a 3D reconstruction of an object from events detected by dynamic vision sensors (par. 3, 19, 38 and 67). Kaufmann likewise explains its system detects events based on a positive or negative change in light intensity where the positive or negative change in light intensity is larger than a certain threshold as a predetermined threshold (par. 57 and 61). It follows Belbachir may be accordingly modified with the teachings of Kaufmann to detect its event as a positive or negative change of its light intensity that is larger than a threshold.
A person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention would find it obvious to modify Belbachir with the teachings of Kaufmann. Kaufmann is analogous art in dealing with a system and method for generating a 3D reconstruction of an object from events detected by dynamic vision sensors (par. 3, 19, 38 and 67). Kaufmann discloses its use of a threshold is advantageous in detecting events for appropriate reconstruction of an object from the detected events (par. 57 and 61). Consequently, a PHOSITA would incorporate the teachings of Kaufmann into Belbachir for detecting events for appropriate reconstruction of an object from the detected events. Therefore, claim 1 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
For claim 12, Belbachir as modified by Kaufmann discloses a method for operating the three-dimensional scanner system of claim 1 to perform the functions of the three-dimensional scanner system of claim 1 (see above as to claim 1).
Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Belbachir in view of Kaufmann further in view of Metzler et al. (U.S. Patent Application Publication 2018/0158200 A1, hereinafter “Metzler”).
For claim 2, depending on claim 1, Belbachir as modified by Kaufmann discloses wherein the control unit is configured to generate for each imaging device a two-dimensional image showing the events detected by the respective imaging device during a predetermined time period, in order to reconstruct the three-dimensional shape the object had during the predetermined time period (Belbachir discloses the digital signal processor generates a still image as a 2D image showing the events detected by the DVSs during a time period determined by timestamps to reconstruct the 3D shape of the object during the time period (pages 79-80/Figs. 3 and 4)).
Belbachir as modified by Kaufmann does not disclose extracting key features and performing feature matching between the extracted key features.
However, these limitations are well-known in the art as disclosed in Metzler.
Metzler similarly discloses a system and method for performing a 3D scan with a dynamic vision sensor to generate a corresponding reconstruction (par. 3 and 51). Metzler explains its system extracts 2D features as key features and performs feature matching on the extracted key features to generate the reconstruction (par. 32, 36-37 and 51). It follows Belbachir and Kaufmann may be accordingly modified with the teachings of Metzler to extract key features from its two-dimensional images to perform feature matching between the extracted key features to perform its reconstruction and to use its additional information to support the feature matching.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Belbachir and Kaufmann with the teachings of Metzler. Metzler is analogous art in dealing with a system and method for performing a 3D scan with a dynamic vision sensor to generate a corresponding reconstruction (par. 3 and 51). Metzler discloses its use of feature matching is advantageous in processing captured images to generate an appropriate reconstruction (par. 32, 36-37 and 51). Consequently, a PHOSITA would incorporate the teachings of Metzler into Belbachir and Kaufmann for processing captured images to generate an appropriate reconstruction. Therefore, claim 2 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
Claim(s) 3 and 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Belbachir in view of Kaufmann further in view of Li et al. (U.S. Patent Application Publication 2022/0405553 A1, hereinafter “Li”).
For claim 3, depending on claim 1, Belbachir as modified by Kaufmann does not disclose detecting visible light in different color channels.
However, these limitations are well-known in the art as disclosed in Li.
Li similarly discloses a system and method for tracking objects and corresponding motion through dynamic vision sensing (par. 110). Li explains its system may perform the sensing by detecting light in different visible color channels for different wavelength ranges (Fig. 8F; par. 128). It follows Belbachir and Kaufmann may be accordingly modified with the teachings of Li to implement its plurality of imaging devices to detect light intensities of visible light in different color channels to generate its additional information.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Belbachir and Kaufmann with the teachings of Li. Li is analogous art in dealing with a system and method for tracking objects and corresponding motion through dynamic vision sensing (par. 110). Li discloses its use of different visible color channels is advantageous in detecting light for different wavelength ranges for appropriate sensing and tracking of objects (par. 110 and 128). Consequently, a PHOSITA would incorporate the teachings of Li into Belbachir and Kaufmann for detecting light for different wavelength ranges for appropriate sensing and tracking of objects. Therefore, claim 3 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
For claim 4, depending on claim 3, Belbachir as modified by Kaufmann and Li discloses wherein additional information is used to add color information on the three-dimensional shape reconstructed from the detected events (Belbachir discloses the 3D shape reconstructed from the detected events is color coded (page 80/Fig. 4); Li similarly discloses a system and method for tracking objects and corresponding motion through dynamic vision sensing (par. 110); Li explains its system may perform the sensing by detecting light in different visible color channels for different wavelength ranges (Fig. 8F; par. 128); and it follows Belbachir and Kaufmann may be accordingly modified with the teachings of Li to implement its plurality of imaging devices to detect light intensities of visible light in different color channels to generate its additional information for application as color coding to its 3D shape reconstructed from its detected events to improve visibility of its 3D shape).
Claim(s) 5-8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Belbachir in view of Kaufmann further in view of Li further in view of Daniilidis et al. (U.S. Patent Application Publication 2020/0265590 A1, hereinafter “Daniilidis”).
For claim 5, depending on claim 3, Belbachir as modified by Kaufmann and Li discloses wherein the additional information comprises a time series of color frame images of the moving object (Belbachir discloses its additional information may comprise a time series of time-stamped still images as color frame images of its moving object (page 79/Fig. 3)).
Belbachir as modified by Kaufmann and Li does not disclose improving images by removing blur from the images or by interpolating the images.
However, these limitations are well-known in the art as disclosed in Daniilidis.
Daniilidis similarly discloses a system and method for sensing motion of an object with an event-based camera implemented with a dynamic vision sensor (par. 7 and 109). Daniilidis explains its system improves event camera images by removing blur from the event camera images to appropriately perform event reconstruction (par. 12 and 52). It follows Belbachir, Kaufmann and Li may be accordingly modified with the teachings of Daniilidis to improve its color frame images by removing blur and to reconstruct its 3D shape of its object from the improved color frame images.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Belbachir, Kaufmann and Li with the teachings of Daniilidis. Daniilidis is analogous art in dealing with a system and method for sensing motion of an object with an event-based camera implemented with a dynamic vision sensor (par. 7 and 109). Daniilidis discloses its use of deblurring is advantageous in correcting images to facilitate appropriate event reconstruction (par. 12 and 52). Consequently, a PHOSITA would incorporate the teachings of Daniilidis into Belbachir, Kaufmann and Li for correcting images to facilitate appropriate event reconstruction. Therefore, claim 5 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
For claim 6, depending on claim 1, Belbachir as modified by Kaufmann, Li and Daniilidis discloses wherein the imaging pixels are capable to group the detected events according to characteristics of the received light; and the additional information comprises information on the spatial and/or temporal distribution of events generated by light of a given characteristic (Daniilidis similarly discloses a system and method for sensing motion of an object with an event-based camera implemented with a dynamic vision sensor (par. 7 and 109); Daniilidis explains it is known to group events according to characteristics of received light such as polarity and information on the spatiotemporal distribution of the events generated by light of a given characteristic as the polarity (par. 30, 33, 47 and 59); and it follows Belbachir, Kaufmann and Li may be accordingly modified with the teachings of Daniilidis to implement its plurality of imaging pixels to group its detected events according to characteristics of received light and to implement its additional information on spatiotemporal distribution of events generated by light of a given characteristic to perform appropriate reconstruction).
For claim 7, depending on claim 6, Belbachir as modified by Kaufmann, Li and Daniilidis discloses wherein the same imaging pixels are capable to detect events and to detect light intensities of different color channels; and the given characteristic is the color of the received light (Li similarly discloses a system and method for tracking objects and corresponding motion through dynamic vision sensing (par. 110). Li explains its system may perform the sensing by detecting light in different visible color channels for different wavelength ranges (Fig. 8F; par. 128). It follows Belbachir and Kaufmann may be accordingly modified with the teachings of Li to implement its plurality of imaging devices to detect its events and to detect light intensities of visible light in different color channels to generate its additional information on a given characteristic of a color of received light to perform appropriate sensing and reconstruction).
For claim 8, depending on claim 6, Belbachir as modified by Kaufmann, Li and Daniilidis discloses wherein the imaging pixels are capable to group the detected events according to the polarization of the received light; and the given characteristic is the polarization of the received light (Daniilidis similarly discloses a system and method for sensing motion of an object with an event-based camera implemented with a dynamic vision sensor (par. 7 and 109); Daniilidis explains it is known to group events according to characteristics of received light such as polarity for polarization of the received light; and it follows Belbachir, Kaufmann and Li may be accordingly modified with the teachings of Daniilidis to implement its plurality of imaging pixels to group its detected events according to polarization of received light and to implement its additional information on polarization of received light to perform appropriate reconstruction).
Claim(s) 9 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Belbachir in view of Kaufmann further in view of Li further in view of Daniilidis further in view of Wu (U.S. Patent Application Publication 2017/0180729 A1).
For claim 9, depending on claim 1, Belbachir as modified by Kaufmann, Li and Daniilidis discloses the control unit is configured to deduce for each imaging device from changes in the distribution of detected events over time directions (Daniilidis similarly discloses a system and method for sensing motion of an object with an event-based camera implemented with a dynamic vision sensor (par. 7 and 109); Daniilidis explains its system performs estimation or deduction of changes in a distribution of detected events as an optical flow and determines directions for the optical flow to perform reconstruction (par. 44 and 109); and it follows Belbachir, Kaufmann and Li may be accordingly modified with the teachings of Daniilidis to configure its control unit to deduce for its plurality of imaging devices from changes in a distribution of its detected events over time directions of an optical flow to perform appropriate reconstruction).
Belbachir as modified by Kaufmann, Li and Daniilidis does not specifically disclose deducing directions of movements of an object.
However, these limitations are well-known in the art as disclosed in Wu.
Wu similarly discloses a system and method for using vision sensors for tracking a moving object with an optical flow field (par. 56 and 72). Wu explains the optical flow field is used to estimate or deduce directions of movements of the object (par. 72 and 85). It follows Belbachir, Kaufmann, Li and Daniilidis may be accordingly modified with the teachings of Wu to configure its control unit to deduce directions of movements of its object and to implement its additional information to include the directions of movements of its object.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Belbachir, Kaufmann, Li and Daniilidis with the teachings of Wu. Wu is analogous art in dealing with a system and method for tracking a moving object with an optical flow field (par. 56 and 72). Wu discloses its use of an optical flow field is advantageous for appropriately determining directions of movement of objects to appropriately track the objects over a sequence of captured images (par. 72). Consequently, a PHOSITA would incorporate the teachings of Wu into Belbachir, Kaufmann, Li and Daniilidis for appropriately determining directions of movement of objects to appropriately track the objects over a sequence of captured images. Therefore, claim 9 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
For claim 10, depending on claim 1, Belbachir as modified by Kaufmann, Li, Daniilidis and Wu discloses wherein the imaging devices are capable to capture an optic flow; and the additional information comprises direction, speed, and acceleration of the optic flow (Wu similarly discloses a system and method for using vision sensors for tracking a moving object with an optical flow field (par. 56 and 72). Wu explains its system determines direction, speed and acceleration of the optic flow field to determine the direction, speed and acceleration of a corresponding object in captured images (par. 72); and it follows Belbachir, Kaufmann, Li and Daniilidis may be accordingly modified with the teachings of Wu to configure its plurality of imaging devices to capture an optic flow and to implement its additional information to include direction, speed and acceleration of the optic flow for tracking its moving object for appropriate reconstruction).
Allowable Subject Matter
Claim 11 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES TSENG whose telephone number is (571)270-3857. The examiner can normally be reached 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLES TSENG/ Primary Examiner, Art Unit 2613