DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is made in response to applicant’s preliminary amendment filed 03/20/2025. Claims 15-28 are currently pending in the application. An action follows below:
Claim Objections
Claim 18 is objected to because of the following informalities: “discrete reference points can be” in line 2 should be changed to -- the at least two discrete reference points are -- in order to provide proper antecedent basis for “discrete reference points” from claim 17. Appropriate correction is required.
Claim 22 is objected to because of the following informalities: “a further device” in line 2 should be changed to -- the further device -- in order to provide proper antecedent basis for this limitation in the claim. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(B) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 18-20 and 22-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
As per claim 18, this claim recites the limitation, “wherein the device is configured such that discrete reference points can be illuminated according to at least one first illumination pattern and at least one second illumination pattern, and it is possible to switch between the first illumination pattern and the second illumination pattern.” Since it is unclear whether “it” refers to “the device,” “discrete reference points,” or something else, it is considered that the invention is not clearly defined.
As per claims 19-20, these claims are directed to a single, stand-alone device, but recite the limitations, “the further device including smart glasses, wherein the data includes information about at least one reference area in at least a first area of a face of the further user, … a virtual target object including an avatar that represents the further user …, and wherein the device is configured to provide a facial expression of the virtual target object based on the data of the further device” in claim 19 and “wherein the device is configured such that providing the facial expression of the virtual target object includes: modulating at least one spline in a first area of the virtual target object based on the information about at least one reference area of the first area” in claim 20, which are associated with and require the features/elements of the further device. In other words, since it is unclear whether these claims are directed to a single, stand-alone device or to a system comprising a device and a further device, it is considered that the invention is not clearly defined.
Further regarding claims 19-20, these claims recite the limitations, “wherein the device is configured to provide a facial expression of the virtual target object based on the data of the further device” in the last two lines of claim 19 and “wherein the device is configured such that providing the facial expression of the virtual target object includes: modulating at least one spline in a first area of the virtual target object based on the information about at least one reference area of the first area” in claim 20. Since it is unclear whether “the virtual target object,” “a first area,” and “reference area” in the above-quoted limitations refer to the features/elements associated with the device, as recited in claim 15, or to the features/elements associated with the further device, as recited in claim 19, it is considered that the invention is not clearly defined.
It is suggested that at least the above-discussed names/terms associated with the further device be assigned names/terms different from those associated with the device, in order to clarify the claimed features and to avoid unnecessary 112 issues.
As per claims 22-24, these claims are directed to a method of operating a single, stand-alone device, but recite limitations which are associated with and require the features/elements of the further device. See the discussions in the rejections of claims 19-20 above for similar limitations. In other words, since it is unclear whether these claims are directed to a method of operating a single, stand-alone device or a system comprising a device and a further device, it is considered that the invention is not clearly defined.
Notice to Applicant(s)
Examiner notes that the specification is not the measure of the invention. Therefore, limitations contained therein cannot be read into the claims for the purpose of avoiding the prior art. See In re Sporck, 55 CCPA 743, 386 F.2d 924, 155 USPQ 687 (1968).
Further, the names/terms of the features/elements used in the pending application or pending claims may differ from the names/terms of the matching features/elements of the prior art references; however, the matching features/elements of the prior art references contain all characteristics/functions of the features/elements DEFINED by the pending claims.
Note that, in order to avoid confusion, the citations in the rejection(s) below are merely one or more places in the reference that disclose the "claimed" limitation(s) and/or are directed to one or more of the embodiments disclosed by the cited reference(s). In other words, the “claimed” features/limitations may be read in other places in the reference or in other embodiments of the reference. In order to better understand how the claimed limitations are taught by the reference(s), the examiner suggests a review of the entire reference(s). Applicant is reminded that a prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention, as not all relevant paragraphs may have been cited in the rejection. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984).
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
FIRST SET OF REJECTIONS:
Claims 15, 16, 19, 21, 22 and 25-28 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zimmermann et al. (US 2021/0312684 A1; hereinafter Zimmermann.)
As per claim 15, Zimmermann discloses a device including smart glasses, which, when worn by a user of the device as intended, is configured to be worn on a head of the user, in particular on the head of the user (see at least Figs. 2-3, reference sign 200; ¶ 59 "The wearable system 200 includes a display 220";)
wherein the device comprises: at least one laser feedback interferometer (LFI) sensor with at least one laser light source including a laser diode, wherein the LFI sensor is disposed on the device and is configured to emit laser radiation into a reference area in a first area of a face outside eyes of the user of the device and to capture a reflected portion of laser radiation (see at least Fig. 3 and the corresponding description, specifically at least ¶¶ 67, 68, 72, 73, disclosing at least one laser feedback interferometer (LFI) sensor [318, 324; mirror and optics at ¶ 68] with at least one laser light source [318] including a laser diode, wherein the LFI sensor is disposed on the device and is configured to emit laser radiation [338] into an area including a reference area in a first area of a face outside eyes of the user of the device; also see ¶ 201 and ¶ 219 for a reference area including an area between the eyes and to capture a reflected portion of the laser radiation with the inward-facing imaging system;)
wherein the device is configured to use the reflected portion of the laser radiation to derive information about the reference area (see at least ¶¶ 68, 72, 73, 201, 210-220, discussing the device configured to use the inward-facing imaging system capturing the reflected portion of the laser radiation and algorithms to derive information about the reference area;) and
wherein the device is configured to provide the information about the reference area for the purpose of displaying a virtual target object including an avatar that represents the user of the device to a further device (see Figs. 9A-12 and the corresponding description, at least Figs. 9-10, ¶¶ 133, 134, 140, 141 and 145, disclosing the device configured to provide the information about the reference area around an eye of the avatar 1000 for the purpose of displaying a virtual target object including an avatar that represents the user of the device to a further device.)
As per claim 16, Zimmermann discloses the LFI sensor including at least one optical element which is configured to expand a laser beam emitted by the laser light source at least along a line (see the discussion in the rejection of claim 15; or see at least ¶¶ 67-68, disclosing the LFI sensor including mirrors and optics configured to expand a laser beam emitted by the laser light source at least along a line.)
As per claim 19, Zimmermann discloses the device configured to receive data from a further device of a further user, the further device including smart glasses, wherein the data includes information about at least one reference area in at least a first area of a face of the further user, and wherein the device is configured to display a virtual target object including an avatar that represents the further user, based on the data of the further device, and wherein the device is configured to provide a facial expression of the virtual target object based on the data of the further device (see at least Figs. 9-12; ¶¶ 133, 134, 140, 141; ¶ 133 “… schematically illustrates an overall system view depicting multiple user devices interacting with each other. The computing environment 900 includes user devices 930a, 930b, 930c. The user devices 930a, 930b, and 930c can communicate with each other through a network 990 …”; ¶ 134 “… information about a specific user's physical and/or virtual worlds …”; ¶ 140 “The wearable devices 902 and 904 can also track the users' eye movements or gaze based on data acquired by the inward-facing imaging system 462 … reflected images of the user to observe the user's facial expressions or other body movements …”.)
As per claim 21, Zimmermann discloses an associated method for operating a device (see the discussion in the rejection of claim 15 above.)
As per claim 22, see the discussion in the rejection of claim 19 above.
As per claim 25, Zimmermann discloses: wherein a distance spectrum and/or speed spectrum ascertained based on the reflected portion of the laser radiation captured using the at least one LFI sensor is made available as input data to at least one trained neural network and information about the reference area or information about a spline assigned to the respective reference area is derived from the distance spectrum and/or from the speed spectrum using the trained neural network and the spline of the virtual target object is modulated based on the derived information (see at least Fig. 7 and the corresponding description, specifically ¶ 114 “… the remote processing module 270 can process the audio data from the microphone (or audio data in another stream such as, e.g., a video stream being watched by the user) to identify content of the speech by applying various speech recognition algorithms, such as, e.g., hidden Markov models, dynamic time warping (DTW) based speech recognitions, neural networks, deep learning algorithms such as deep feedforward and recurrent neural networks, end-to-end automatic speech recognitions, machine learning algorithms …”.)
As per claim 26, Zimmermann discloses: the method further comprising a learning phase for adapting the trained neural network to the user of the device, wherein the neural network is adapted to the user using a camera, wherein image data recorded with the camera are used as labeled training data for adapting the neural network (see at least Fig. 7 and the corresponding description, specifically ¶ 114 “… the remote processing module 270 can process the audio data from the microphone (or audio data in another stream such as, e.g., a video stream being watched by the user) to identify content of the speech by applying various speech recognition algorithms, such as, e.g., hidden Markov models, dynamic time warping (DTW) based speech recognitions, neural networks, deep learning algorithms such as deep feed forward and recurrent neural networks, end-to-end automatic speech recognitions, machine learning algorithms …”; see at least ¶¶ 68, 73, disclosing to use a camera recording the image data.)
As per claims 27-28, Zimmermann discloses a communication system and an associated communication method between at least two users via a communication network (see the discussion in the rejections of claims 15 and 19 above.)
Claims 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Zimmermann in view of Bikumandla et al. (US 2021/0223856 A1; hereinafter Bikumandla.)
As per claims 17-18, Zimmermann discloses the LFI sensor configured such that a laser beam emitted by the laser light source is directed at the reference area (see at least Fig. 3; ¶ 68, ¶ 72,) but is silent as to “wherein the LFI sensor is configured such that a laser beam emitted by the laser light source is split into at least two discrete subbeams, so that at least two discrete reference points within the reference area are illuminated” of claim 17 and “wherein the device is configured such that discrete reference points can be illuminated according to at least one first illumination pattern and at least one second illumination pattern, and it is possible to switch between the first illumination pattern and the second illumination pattern” of claim 18.
However, in the same field of endeavor, Bikumandla discloses a related device (202; at least Fig. 2) comprising an LFI sensor configured such that a laser beam emitted by the laser light source is split into at least two discrete subbeams, so that at least two discrete reference points within the reference area are illuminated (see at least Figs. 2 and 3/4; ¶¶ 34-35, disclosing a face tracking device [300/400] comprising an illuminator 110 configured such that a laser beam emitted by the laser light source is split into, e.g., eight discrete subbeams so that eight discrete reference points within the reference area are illuminated and detected by the respective eight detectors [111a-111h]; ¶ 37 “… the multiple detectors 111a-h may include 2, 3, 4, 5, 6, 7, 8, 9, 10 or more detectors … each of the multiple detectors 111a-h may be configured to receive light reflected from a respective portion of an area where the respective portions do not overlap …”) and the device configured such that discrete reference points can be illuminated according to at least one first illumination pattern and at least one second illumination pattern, and it is possible to switch between the first illumination pattern and the second illumination pattern (see the above discussion; or see at least Figs. 2 and 3/4; ¶¶ 34-37,) thereby enhancing a resolution of a respective area (see at least ¶ 37: “… to enhance a resolution of a respective area …”.)
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention of the pending application to replace the LFI sensor of Zimmermann with an LFI sensor comprising an illuminator and plural detectors, in view of the teaching of the Bikumandla reference, to improve the device of the Zimmermann reference for the predictable result of enhancing a resolution of a respective area.
Claims 20, 23 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Zimmermann in view of Bikumandla.
As per claims 20 and 23, Zimmermann, as discussed in the rejection of claim 19, discloses the device configured to provide the facial expression of the virtual target object and, specifically, discloses at least at ¶ 170 “… In certain implementations, as the user moves (or the avatar moves) around in the environment, the wearable system can continuously track the user's head pose … dynamically adjust the size of the avatar … thereby allowing both participants (e.g., avatar and its viewer) to communicate eye-to-eye …”, i.e., modulating the first area of the virtual target object based on the information about at least one reference area of the first area. Zimmermann is silent as to the use of a spline technique to modulate at least one spline in a first area of the virtual target object based on the information about at least one reference area of the first area. Official Notice is taken that both the concept and the advantages of using a spline technique for carrying out data processing in an electronic device are well known and expected in the art. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention of the pending application to use the conventional spline technique in the device of Zimmermann, as this conventional spline technique is known to carry out data processing. Accordingly, the modified device of Zimmermann using the spline technique obviously renders all limitations of these claims.
As per claim 24, the modified device of Zimmermann using the spline technique obviously renders a respective reference area assigned to or associated with the at least one spline of the virtual target object, and the at least one spline is modulated based on the information about the respective reference area (see the discussion in the rejection of claim 23 above.)
SECOND/ALTERNATIVE SET OF REJECTIONS:
This alternative set is made with the assumption that the laser light source and the detector are integrally formed in a single laser feedback interferometer (LFI) sensor [18], as shown in Fig. 1.
Claims 15-28 are rejected, in the alternative, under 35 U.S.C. 103 as being unpatentable over Zimmermann in view of Bikumandla.
As per claim 15, Zimmermann discloses a device including smart glasses, which, when worn by a user of the device as intended, is configured to be worn on a head of the user, in particular on the head of the user (see at least Figs. 2-3, reference sign 200; ¶ 59 "The wearable system 200 includes a display 220";)
wherein the device comprises: at least one laser light source [318] including a laser diode disposed on the device and configured to emit laser radiation into a reference area in a first area of a face outside eyes of the user of the device; and at least one detector [324] configured to capture a reflected portion of laser radiation (see at least Fig. 3 and the corresponding description, specifically at least ¶¶ 67, 68, 72, 73, disclosing at least one laser light source [318] including a laser diode configured to emit laser radiation [338] into an area including a reference area in a first area of a face outside eyes of the user of the device; also see ¶ 201 and ¶ 219 for a reference area including an area between the eyes and at least one detector [324] configured to capture a reflected portion of the laser radiation with the inward-facing imaging system;)
wherein the device is configured to use the reflected portion of the laser radiation to derive information about the reference area (see at least ¶¶ 68, 72, 73, 201, 210-220, discussing the device configured to use the inward-facing imaging system capturing the reflected portion of the laser radiation and algorithms to derive information about the reference area;) and
wherein the device is configured to provide the information about the reference area for the purpose of displaying a virtual target object including an avatar that represents the user of the device to a further device (see Figs. 9A-12 and the corresponding description, at least Figs. 9-10, ¶¶ 133, 134, 140, 141 and 145, disclosing the device configured to provide the information about the reference area around an eye of the avatar 1000 for the purpose of displaying a virtual target object including an avatar that represents the user of the device to a further device.)
Zimmermann discloses all limitations of this claim except that Zimmermann discloses the laser light source [318] and the detector [324] arranged separately, instead of being integrally formed in a single laser feedback interferometer (LFI) sensor [18], as assumed in light of Fig. 1 of the pending application.
However, in the same field of endeavor, Bikumandla discloses a related device (202; at least Fig. 2) comprising at least one laser feedback interferometer (LFI) sensor [240, 245], each LFI sensor disposed on the device (see at least Fig. 2) and comprising a laser light source [110; Fig. 3/4] including a laser diode configured to emit laser radiation into a reference area in a first area of a face outside a portion of the face of the user, e.g., the nose of the user of the device (see Fig. 6) and detectors [111a-111h] configured to capture a reflected portion of laser radiation; wherein the device is configured to use the reflected portion of the laser radiation to derive information about the reference area (see at least Fig. 3 or 4; ¶¶ 34-37,) wherein the laser light source and the detectors are integrally formed as the laser feedback interferometer (LFI) sensor [240/245] (see at least Fig. 3 or 4,) thereby at least enhancing a resolution of a respective area (see at least ¶ 37: “… to enhance a resolution of a respective area …”), simplifying the manufacture of the laser feedback interferometer (LFI) sensor, and enabling easy replacement when necessary, as readily recognized by one of ordinary skill in the art.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention of the pending application to replace the light source and the detector of Zimmermann with the LFI sensor comprising a light source and plural detectors, in view of the teaching of the Bikumandla reference, to improve the device of the Zimmermann reference for the predictable result of at least enhancing a resolution of a respective area, simplifying the manufacture of the laser feedback interferometer (LFI) sensor, and enabling easy replacement when necessary, as readily recognized by one of ordinary skill in the art. Accordingly, the above modified device of Zimmermann in view of Bikumandla obviously renders all limitations of this claim.
As per claim 16, the above modified device of Zimmermann obviously renders the LFI sensor including at least one optical element which is configured to expand a laser beam emitted by the laser light source at least along a line (see Zimmermann at least ¶¶ 67-68, disclosing mirrors and optics configured to expand a laser beam emitted by the laser light source at least along a line.)
As per claims 17-18, the above modified device of Zimmermann obviously renders: the LFI sensor is configured such that a laser beam emitted by the laser light source is split into at least two discrete subbeams, so that at least two discrete reference points within the reference area are illuminated and the device is configured such that discrete reference points can be illuminated according to at least one first illumination pattern and at least one second illumination pattern, and it is possible to switch between the first illumination pattern and the second illumination pattern (see Bikumandla at least Figs. 2 and 3/4; ¶¶ 34-37, disclosing a face tracking device [300/400] comprising an illuminator 110 configured such that a laser beam emitted by the laser light source is split into, e.g., eight discrete subbeams so that eight discrete reference points within the reference area are illuminated and detected by the respective eight detectors [111a-111h]; ¶ 37 “… the multiple detectors 111a-h may include 2, 3, 4, 5, 6, 7, 8, 9, 10 or more detectors … each of the multiple detectors 111a-h may be configured to receive light reflected from a respective portion of an area where the respective portions do not overlap …”.) Accordingly, the above modified device of Zimmermann in view of Bikumandla obviously renders all limitations of these claims.
As per claim 19, the above modified device of Zimmermann obviously renders the device configured to receive data from a further device of a further user, the further device including smart glasses, wherein the data includes information about at least one reference area in at least a first area of a face of the further user, and wherein the device is configured to display a virtual target object including an avatar that represents the further user, based on the data of the further device, and wherein the device is configured to provide a facial expression of the virtual target object based on the data of the further device (see Zimmermann at least Figs. 9-12; ¶¶ 133, 134, 140, 141; ¶ 133 “… schematically illustrates an overall system view depicting multiple user devices interacting with each other. The computing environment 900 includes user devices 930a, 930b, 930c. The user devices 930a, 930b, and 930c can communicate with each other through a network 990 …”; Zimmermann ¶ 134 “… information about a specific user's physical and/or virtual worlds …”; Zimmermann ¶ 140 “The wearable devices 902 and 904 can also track the users' eye movements or gaze based on data acquired by the inward-facing imaging system 462 … reflected images of the user to observe the user's facial expressions or other body movements …”.)
As per claim 20, Zimmermann, as discussed in the rejection of claim 19, discloses the device configured to provide the facial expression of the virtual target object and, specifically, discloses at least at ¶ 170 “… In certain implementations, as the user moves (or the avatar moves) around in the environment, the wearable system can continuously track the user's head pose … dynamically adjust the size of the avatar … thereby allowing both participants (e.g., avatar and its viewer) to communicate eye-to-eye …”, i.e., modulating the first area of the virtual target object based on the information about at least one reference area of the first area. Zimmermann is silent as to the use of a spline technique to modulate at least one spline in a first area of the virtual target object based on the information about at least one reference area of the first area. Official Notice is taken that both the concept and the advantages of using a spline technique for carrying out data processing in an electronic device are well known and expected in the art. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention of the pending application to use the conventional spline technique in the above modified device of Zimmermann, as this conventional spline technique is known to carry out data processing. Accordingly, the modified device of Zimmermann using the spline technique obviously renders all limitations of this claim.
As per claim 21, the above modified device of Zimmermann obviously renders an associated method for operating a device (see the discussion in the rejection of claim 15 above.)
As per claim 22, see the discussion in the rejection of claim 19 above.
As per claim 23, see the discussion in the rejection of claim 20 above.
As per claim 24, the modified device of Zimmermann using the spline technique obviously renders a respective reference area assigned to or associated with the at least one spline of the virtual target object, and the at least one spline is modulated based on the information about the respective reference area (see the discussion in the rejection of claim 23 above.)
As per claim 25, the above modified device of Zimmermann obviously renders: wherein a distance spectrum and/or speed spectrum ascertained based on the reflected portion of the laser radiation captured using the at least one LFI sensor is made available as input data to at least one trained neural network and information about the reference area or information about a spline assigned to the respective reference area is derived from the distance spectrum and/or from the speed spectrum using the trained neural network and the spline of the virtual target object is modulated based on the derived information (see Zimmermann at least Fig. 7 and the corresponding description, specifically Zimmermann ¶ 114 “… the remote processing module 270 can process the audio data from the microphone (or audio data in another stream such as, e.g., a video stream being watched by the user) to identify content of the speech by applying various speech recognition algorithms, such as, e.g., hidden Markov models, dynamic time warping (DTW) based speech recognitions, neural networks, deep learning algorithms such as deep feedforward and recurrent neural networks, end-to-end automatic speech recognitions, machine learning algorithms …”.)
As per claim 26, the above modified device of Zimmermann obviously renders: the method further comprising a learning phase for adapting the trained neural network to the user of the device, wherein the neural network is adapted to the user using a camera, wherein image data recorded with the camera are used as labeled training data for adapting the neural network (see Zimmermann at least Fig. 7 and the corresponding description, specifically Zimmermann ¶ 114 “… the remote processing module 270 can process the audio data from the microphone (or audio data in another stream such as, e.g., a video stream being watched by the user) to identify content of the speech by applying various speech recognition algorithms, such as, e.g., hidden Markov models, dynamic time warping (DTW) based speech recognitions, neural networks, deep learning algorithms such as deep feed forward and recurrent neural networks, end-to-end automatic speech recognitions, machine learning algorithms …”; see Zimmermann at least ¶¶ 68, 73, disclosing to use a camera recording the image data.)
As per claims 27-28, the above modified device of Zimmermann obviously renders a communication system and an associated communication method between at least two users via a communication network (see the discussion in the rejections of claims 15 and 19 above.)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Yamazaki et al. (US 2022/0137409 A1) discloses a related communication system, comprising: at least one first device worn by a user and at least one further device worn by a further user (see at least Fig. 6A.)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jimmy H Nguyen whose telephone number is (571) 272-7675. The examiner can normally be reached on Monday-Friday 8:30AM-6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Temesghen Ghebretinsae, can be reached at (571) 272-3017. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Jimmy H Nguyen/
Primary Examiner, Art Unit 2626