DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 11 is objected to because of the following informalities: Claim 11 recites, “The electronic device of claim, . . ..” The Examiner believes this is a typographical error. Appropriate correction is required. For the purposes of the art rejection, the Examiner is treating the limitation as “The electronic device of claim 9, . . ..” in light of the hierarchy of the first set of claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Mohan et al. (“DualGaze: Addressing the Midas Touch Problem in Gaze Mediated VR Interaction”) in view of Manduchi (US 20210294413 A1).
Regarding Claim 1, Mohan teaches A method of operating an electronic device (“With the increasing acceptance of eye tracking as a viable interaction method for Virtual Reality (VR) headsets, thoughtful gaze interaction methods need to be carefully designed to meet common challenges such as the Midas Touch problem, where users unintentionally select onscreen objects by gazing upon them.” Mohan Abstract.), comprising:
collecting, during a first period of time, location information of a gaze of a user within a first region of a gaze field of view (
[image: media_image1.png]
, where the first region is mapped to
[image: media_image2.png]
of the DualGaze method in Fig. 1.
“The activation of a target onscreen choice is achieved using a quick successive two-gaze strategy. After the user’s gaze enters the target region (e.g., a ‘clickable’ button), a smaller pop-up box called the confirmation flag would appear right next to the target region.” Mohan section 3 (The DualGaze Interaction Method). Here, the system determines the location of the gaze.
A gaze of a user is within the user’s gaze field of view.
The first period of time is mapped to the duration when the gaze focuses on the target region.);
detecting, during a second period of time subsequent to the first period of time, movement of the gaze of the user to a second region of the gaze field of view (
The second region is mapped to
[image: media_image3.png]
of
[image: media_image4.png]
based on the DualGaze method in Fig. 1.
The second period of time is mapped to the duration when the gaze focuses on the confirmation region.
“By a second gaze on the confirmation flag, the user can confirm the selection. We design the confirmation flag in the form of a simple box, roughly a quarter of the menu button in size, and coloured distinguishably from the target button.” Mohan section 3.
The second period of time (confirmation) is subsequent to the first period of time (gaze on the target region), as shown in fig. 1.);
in response to detecting the movement of the gaze of the user to the second region, selecting an activation location of a display of the electronic device (
“By a second gaze on the confirmation flag, the user can confirm the selection. We design the confirmation flag in the form of a simple box, roughly a quarter of the menu button in size, and coloured distinguishably from the target button.” Mohan section 3.
Here, the activation location of the target button/region is affirmatively/formally selected.
[image: media_image5.png]
explicitly shows that there are multiple target/button (1-3) region locations to select from.
In the example in fig. 2, the location of target/button 3 is selected by the DualGaze process.),
wherein the activation location is selected using the location information collected during the first period of time (
Fig. 1:
[image: media_image6.png]
, which shows that the confirmation is of the location of the target based on the past gaze trajectory during the first period of time.); and
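As an illustrative aside (not part of Mohan’s disclosure), the quoted two-gaze strategy can be sketched as follows. The `Region` type, the gaze-sample format, and the choice of the target region’s centre as the activation location are assumptions made only for illustration:

```python
from dataclasses import dataclass

@dataclass
class Region:
    # Axis-aligned rectangle on the display (illustrative only)
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def dualgaze_select(gaze_samples, target: Region, flag: Region):
    """Return an activation location once the gaze has (1) dwelt in the
    target region and then (2) moved to the confirmation flag."""
    entered_target = False
    for (x, y) in gaze_samples:
        if not entered_target:
            # First period of time: gaze enters the target region
            if target.contains(x, y):
                entered_target = True
        elif flag.contains(x, y):
            # Second period of time: second gaze on the confirmation flag
            return ((target.x0 + target.x1) / 2,
                    (target.y0 + target.y1) / 2)
    return None  # no confirmed selection (avoids the Midas Touch problem)
```

In this sketch, a gaze that never reaches the confirmation flag produces no selection, which is the point of the DualGaze confirmation step.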
Mohan does not explicitly disclose that the activation location is where a display is activated for displaying a zoom reticle at the display location, wherein the zoom reticle includes a magnified portion of the gaze field of view.
Manduchi teaches the activation location is where a display is activated for displaying a zoom reticle at the display location, wherein the zoom reticle includes a magnified portion of the gaze field of view (
“Block 404 represents connecting a computer to the display. In one example, the computer obtains gaze data from the gaze direction, maps the gaze data onto a desired location on the display corresponding to the gaze direction, provides a center of magnification on the display, and moves the center of magnification onto the desired location so as to magnify the content on the display. . . . determines a magnification area and an unmagnified area on the display so that the gaze point is within the magnified area, and instructs the display to magnify an image on the display in the magnification area so that the user can view the image comprising at least a portion of the content comprising text and/or graphics.” Manduchi ¶ 121.
[image: media_image7.png]
).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Manduchi’s zoom reticle with Mohan. One of ordinary skill in the art would be motivated to help a user with vision difficulties. “The present disclosure relates to methods and systems for magnifying text or graphics on a screen to aid reading of the text or graphics.” Manduchi ¶ 2.
After the combination of Mohan and Manduchi, Manduchi’s magnification window/reticle is confirmed using Mohan’s technique, improving the reliability of gaze-based selections.
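As an illustrative aside (not drawn from Manduchi’s specification), the placement of a magnification area around a gaze point, clamped so that it stays on the display, can be sketched as follows. The window dimensions and the clamping policy are assumptions for illustration:

```python
def magnification_area(gaze_x, gaze_y, display_w, display_h,
                       win_w=200, win_h=120):
    """Place a magnification window (zoom reticle) so that the gaze point
    lies inside it, clamped to remain within the display bounds."""
    x0 = min(max(gaze_x - win_w // 2, 0), display_w - win_w)
    y0 = min(max(gaze_y - win_h // 2, 0), display_h - win_h)
    # Returns (left, top, right, bottom) of the magnified area;
    # everything outside this rectangle remains unmagnified.
    return (x0, y0, x0 + win_w, y0 + win_h)
```

A gaze point near a display edge still yields a window fully on-screen, so the magnified area always contains the gaze point.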
Claim 9 is substantially similar to Claim 1. Claim 1’s rejection analyses based on Mohan in view of Manduchi are applied to Claim 9 as well. In addition, Mohan further teaches Claim 9’s limitation:
An electronic device, comprising:
an eye tracker configured to detect a gaze of a user within a gaze field of view (“The VR headset used for the study was FOVE VR developed by a Tokyo-based start-up. FOVE uses an infrared technology for accurate tracking of eye movements with low latency. Embedded sensors within the headset track the users’ pupils, while they interact with the objects on the screen.” Mohan Section 4.1.);
a display (
[image: media_image8.png]
);
a processor operably coupled to the eye tracker and the display, the processor configured to: . . . (“Setting up the FOVE headset involves a calibration of an external positional tracking camera, which should always be able to see a full view of the headset, as well as staying connected to the PC through USB wires; see Figure 3 for the physical setup.” Mohan 4.1.).
Regarding Claim 10, The electronic device of claim 9, comprising: a camera having an imaging field of view that at least partially overlaps the gaze field of view (“Setting up the FOVE headset involves a calibration of an external positional tracking camera, which should always be able to see a full view of the headset, as well as staying connected to the PC through USB wires; see Figure 3 for the physical setup.” Mohan 4.1.
The gaze field of view originated from eyes within the headset as shown in fig. 3. The external camera has a “full view of the headset,” where the eye gaze originates. Therefore, the imaging field of view partially overlaps the gaze field of view.).
Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Mohan in view of Manduchi as applied to Claim 1, in further view of Aguilar et al. (“Evaluation of a gaze-controlled vision enhancement system for reading in visually impaired people”).
Regarding Claim 2, Mohan in view of Manduchi teaches The method of claim 1.
Mohan in view of Manduchi does not explicitly disclose wherein selecting the display location comprises:
detecting a target object in the gaze field of view using the location information collected during the first period of time; and
selecting the display location using a position of the target object in the gaze field of view.
Aguilar teaches wherein selecting the display location comprises:
detecting a target object in the gaze field of view using the location information collected during the first period of time (
[image: media_image9.png]
The target object is mapped to the disclosed “line of text.”
The “line of text” is detected to determine whether a gaze position is “on a line of text,” mapped to the target object.
The location information is mapped to the “gaze position.”); and
selecting the display location using a position of the target object in the gaze field of view (
Aguilar fig. 1 discloses “Definition and Highlighting of ROI Based on (x,y)”, where the ROI relates to the display location and (x,y) is the gaze location.
After Mohan in view of Manduchi is combined with Aguilar, Mohan in view of Manduchi’s zoom reticle corresponds to Aguilar’s ROI.
The display location of the ROI (zoom reticle) is selected using the gaze location (x, y) and the position of the “line of text” (target object), because the gaze location is aligned with the position of the “line of text” when the gaze is “on the line of text.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Aguilar’s teaching of determination of the ROI based on the target location with Mohan in view of Manduchi. One of ordinary skill in the art would be motivated to initiate a zoom reticle on an area where there are objects that interest the user. “This project investigates the feasibility and the potential benefits for low vision patients of using an integrated system combining gaze-control functions with see-through glasses. The general principle of our system is to use gaze direction to define a region of interest (ROI)—visible on the screen—that can subsequently be visually enhanced (see flowchart in Fig 1).” Aguilar section 2.1.
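As an illustrative aside (not part of Aguilar’s disclosure), selecting an ROI from a gaze point when that point falls on a line of text can be sketched as follows. The line representation (a list of line y-coordinates), the ROI dimensions, and the vertical tolerance are assumptions for illustration:

```python
def select_roi(gaze_x, gaze_y, text_lines, roi_w=300, roi_h=40, tol=10):
    """If the gaze (x, y) lies on a line of text, centre the ROI on that
    line's vertical position; otherwise return no ROI."""
    for line_y in text_lines:               # y-coordinates of detected lines
        if abs(gaze_y - line_y) <= tol:     # gaze is "on the line of text"
            # ROI (left, top, right, bottom) centred horizontally on the
            # gaze and vertically on the text line
            return (gaze_x - roi_w // 2, line_y - roi_h // 2,
                    gaze_x + roi_w // 2, line_y + roi_h // 2)
    return None  # gaze is between lines: no ROI is highlighted
```

Snapping the ROI to the detected line, rather than to the raw gaze sample, is what makes the target object’s position (not just the gaze) drive the display location.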
Claim 11 is substantially similar to Claim 2.¹ Claim 2’s rejection analyses based on Mohan in view of Manduchi and Aguilar are applied to Claim 11 as well.
Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Mohan in view of Manduchi and Aguilar as applied to Claim 2, in further view of Yoon et al. (US 20220066221 A1).
Regarding Claim 3, Mohan in view of Manduchi and Aguilar teaches The method of claim 2.
Mohan in view of Manduchi and Aguilar does not explicitly disclose comprising: detecting motion of the target object in the gaze field of view.
Yoon teaches comprising: detecting motion of the target object in the gaze field of view ( “The method according to an embodiment may further include: detecting movement of an object to which the gaze of the user is directed based on the gaze direction; determining whether the movement of the object exceeds a preset reference value; and reducing a frame rate of the non-dominant display panel corresponding to the non-dominant eye based on determining that the movement exceeds the reference value.” Yoon ¶ 166.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Yoon’s motion detection of the target object with Mohan in view of Manduchi and Aguilar. One of ordinary skill in the art would be motivated to protect a user’s vision comfort and/or to save power. “Certain embodiments of the disclosure may provide device and a method that distinguishes between the dominant eye and the non-dominant eye of the user, and change settings of the display panel corresponding to the non-dominant eye to reduce power consumption.” Yoon ¶ 6.
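As an illustrative aside (not part of Yoon’s disclosure), the quoted per-frame check of the gazed-at object’s movement against a preset reference value, with a frame-rate reduction when exceeded, can be sketched as follows. The Euclidean movement metric and the specific frame rates are assumptions for illustration:

```python
def adjust_frame_rate(prev_pos, cur_pos, threshold,
                      full_rate=90, reduced_rate=45):
    """Detect motion of the object the gaze is directed at between two
    frames; if the movement exceeds the preset reference value, return a
    reduced frame rate for the non-dominant display panel."""
    dx = cur_pos[0] - prev_pos[0]
    dy = cur_pos[1] - prev_pos[1]
    moved = (dx * dx + dy * dy) ** 0.5  # Euclidean displacement
    return reduced_rate if moved > threshold else full_rate
```

The comparison against a fixed reference value mirrors Yoon ¶ 166’s “determining whether the movement of the object exceeds a preset reference value.”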
Claim 12 is substantially similar to Claim 3. Claim 3’s rejection analyses based on Mohan in view of Manduchi and Aguilar are also applied to Claim 12.
Claims 6-7 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Mohan in view of Manduchi as applied to Claim 1, in further view of Choi et al. (“Kuiper Belt: Utilizing the “Out-of-natural Angle” Region in the Eye-gaze Interaction for Virtual Reality”).
Regarding Claim 6, Mohan in view of Manduchi teaches The method of claim 1.
Mohan in view of Manduchi does not explicitly disclose wherein:
the display has a display area that is positioned to partially overlap the gaze field of view; and
the display location is selected such that the zoom reticle is positioned within the display area.
Choi teaches wherein:
the display has a display area that is positioned to partially overlap the gaze field of view (
[image: media_image10.png]
The display area is mapped to any display area within the “Natural Gaze Angle.”
The gaze field of view is mapped to a field of view that includes both the “Kuiper Belt” and the “Natural Gaze Angle.”
“The maximum physical range of horizontal human eye movement is approximately 45°. However, in a natural gaze shift, the difference in the direction of the gaze relative to the frontal direction of the head rarely exceeds 25°. We name this region of 25°−45° the ‘Kuiper Belt’ in the eye-gaze interaction.” Choi Abstract.
The gaze field of view and the display area partially overlap, because the gaze field of view contains an area (the “Kuiper Belt”) not included in the display area (the “Natural Gaze Angle”)); and
the display location is selected such that the zoom reticle is positioned within the display area (
“In all of the aforementioned methods, the target is positioned at an angle of less than 20° difference between the eyes and the head (Ishiguro et al. [21] (< 18.5°); Tonnis et al. [57] (8°); Lu et al. [30] (< 20°); Mardanbegi et al. [35] (20°)). Similarly, due to the size of the target itself, the angle at which the user’s gaze has to move is smaller. Thus, the position of the target in these methods lies within the natural human gaze area, where the angular difference between the eye and head is less than 25°.” Choi 2.3.
After Mohan in view of Manduchi is combined with Choi, Mohan in view of Manduchi’s zoom reticle will be positioned similar as Choi’s target.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Choi’s teaching of displaying target within the natural human gaze area with Mohan in view of Manduchi. One of ordinary skill in the art would be motivated to provide convenience/ease for viewing, because the range of “Natural Gaze Angle” is natural for a user.
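As an illustrative aside (not part of Choi’s disclosure), the partition of horizontal gaze angles into the natural region, the Kuiper Belt, and the out-of-range region follows directly from the quoted 25° and 45° boundaries. The function name and string labels are illustrative:

```python
def classify_gaze_angle(eye_head_angle_deg: float) -> str:
    """Classify a horizontal gaze angle (relative to the head's frontal
    direction) per Choi: natural region up to 25 degrees, 'Kuiper Belt'
    between 25 and 45 degrees, beyond that physically out of range."""
    a = abs(eye_head_angle_deg)
    if a <= 25:
        return "natural"       # within the Natural Gaze Angle
    if a <= 45:
        return "kuiper_belt"   # out-of-natural-angle region
    return "out_of_range"      # beyond physical eye-movement range
```

Placing the zoom reticle only at “natural” angles, and confirmation targets in the “kuiper_belt” band, matches the examiner’s mapping above.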
Regarding Claim 7, Mohan in view of Manduchi and Choi teaches The method of claim 6, wherein:
the second region of the gaze field of view is positioned outside of the display area (
[image: media_image10.png]
After Mohan in view of Manduchi is combined with Choi, Mohan in view of Manduchi and Choi’s confirmation area/flag is placed in Choi’s Kuiper Belt, similar to the mail icon in Choi’s fig. 1.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Choi’s teaching of displaying the target within the natural human gaze area with Mohan in view of Manduchi. One of ordinary skill in the art would be motivated to reduce false input based on gaze gestures. Choi states, “We name this region of 25°−45° the ‘Kuiper Belt’ in the eye-gaze interaction. We try to utilize this region to solve the Midas touch problem to enable a search task while reducing false input in the Virtual Reality environment.” Abstract.
Claims 15-16 are substantially similar to Claims 6-7. Claims 6-7’s rejection analyses based on Mohan in view of Manduchi and Choi are also applied to Claim 15-16.
Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Mohan in view of Manduchi and Choi as applied to Claim 6, in further view of Rapp (US 20060233105 A1).
Regarding Claim 8, Mohan in view of Manduchi and Choi teaches The method of claim 6, wherein selecting the display location comprises:
detecting a target object in the gaze field of view using the location information detected during the first period of time, wherein the target object is positioned outside of the display area (
The display area is mapped to the display area within the “Natural Gaze Angle.”
[image: media_image10.png]
, where the target object is mapped to the icon at the location:
[image: media_image11.png]
. The target object is positioned outside of the “Natural Gaze Angle,” which is mapped to the display area.
A user’s selection of the icon
[image: media_image11.png]
is based on the detection of that icon.); and
selecting the display location using . . . (Mohan in view of Manduchi and Choi teaches that a confirmation area/flag is placed in Choi’s Kuiper Belt, similar to the mail icon in Choi’s fig. 1. The confirmation area/flag is used to select the display location.
“By a second gaze on the confirmation flag, the user can confirm the selection. We design the confirmation flag in the form of a simple box, roughly a quarter of the menu button in size, and coloured distinguishably from the target button.” Mohan section 3.).
Mohan in view of Manduchi and Choi does not explicitly disclose a position of the target object is used to select the display location.
Rapp teaches a position of the target object is used to select the display location (
Rapp teaches target objects of “confirmation” and “rejection” of a selection, stating “By clicking on button 606, the confirmer can confirm the request, whereas by clicking on button 608, the confirmer can enter his or her rejection of the request.” Rapp ¶ 82.
The target object “confirmation” and “rejection” buttons/flags are placed at different locations. Therefore, the position of the target object (button or flag) is used to select the display location through confirmation or rejection.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Rapp’s confirmation and rejection buttons with Mohan in view of Manduchi and Choi. One of ordinary skill in the art would be motivated to provide a user with more options to explicitly express a range of preferences. Rapp teaches target objects of “confirmation” and “rejection” of a selection, stating “By clicking on button 606, the confirmer can confirm the request, whereas by clicking on button 608, the confirmer can enter his or her rejection of the request.” Rapp ¶ 82.
Claim 17 is substantially similar to Claim 8. Claim 8’s rejection analyses based on Mohan in view of Manduchi, Choi, and Rapp are also applied to Claim 17.
Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mohan in view of Manduchi and Powderly et al. (US 20180307303 A1).
Regarding Claim 18, Mohan teaches A method of operating an electronic device (“The VR headset used for the study was FOVE VR developed by a Tokyo-based start-up. FOVE uses an infrared technology for accurate tracking of eye movements with low latency. Embedded sensors within the headset track the users’ pupils, while they interact with the objects on the screen.” Mohan Section 4.1.), comprising:
collecting, during a first period of time, location information of a gaze of a user within a gaze field of view (
[image: media_image1.png]
, where the location information is mapped to the gaze location of
[image: media_image2.png]
of the DualGaze method in Fig. 1.
“The activation of a target onscreen choice is achieved using a quick successive two-gaze strategy. After the user’s gaze enters the target region (e.g., a ‘clickable’ button), a smaller pop-up box called the confirmation flag would appear right next to the target region.” Mohan section 3 (The DualGaze Interaction Method).
The first period of time is mapped to the duration when the gaze focuses on the target region.);
receiving a request to activate a GUI interface (
Fig. 1 shows that there are two types of request, “DualGaze” and “Fixed Gaze,” to confirm and activate a selection.); and
in response to receiving the request:
using an analysis technique based on the request type;
selecting an activation location of a display of the electronic device, wherein the activation location is selected based on an analysis of the collected information using the selected analysis technique (
The analysis technique could be based on the confirmation flag
[image: media_image3.png]
of
[image: media_image4.png]
based on the DualGaze method in Fig. 1.
“By a second gaze on the confirmation flag, the user can confirm the selection. We design the confirmation flag in the form of a simple box, roughly a quarter of the menu button in size, and coloured distinguishably from the target button.” Mohan section 3.
Here, the activation location of the target button/region is affirmatively/formally selected.
[image: media_image5.png]
explicitly shows that there are multiple target/button (1-3) region locations to select from.
In the example in fig. 2, the location of target/button 3 is selected after the DualGaze process.
Further, the selected analysis technique could also be the “Fixed Gaze” technique, as shown in fig. 1.)
Mohan does not explicitly disclose
the GUI interface activated is a zoom reticle,
in response to receiving the request, selecting the analysis technique based on the request type, or
the activation location is a display location for displaying a zoom reticle at the display location, wherein the zoom reticle includes a magnified portion of the gaze field of view.
Manduchi teaches
the GUI interface activated is a zoom reticle (Manduchi fig. 1B; ¶ 121.),
the activation location is a display location for displaying a zoom reticle at the display location, wherein the zoom reticle includes a magnified portion of the gaze field of view (
“Block 404 represents connecting a computer to the display. In one example, the computer obtains gaze data from the gaze direction, maps the gaze data onto a desired location on the display corresponding to the gaze direction, provides a center of magnification on the display, and moves the center of magnification onto the desired location so as to magnify the content on the display. . . . determines a magnification area and an unmagnified area on the display so that the gaze point is within the magnified area, and instructs the display to magnify an image on the display in the magnification area so that the user can view the image comprising at least a portion of the content comprising text and/or graphics.” Manduchi ¶ 121.
[image: media_image7.png]
).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Manduchi’s zoom reticle with Mohan. One of ordinary skill in the art would be motivated to help a user with vision difficulties. “The present disclosure relates to methods and systems for magnifying text or graphics on a screen to aid reading of the text or graphics.” Manduchi ¶ 2.
After the combination of Mohan and Manduchi, Manduchi’s magnification window/reticle is confirmed using Mohan’s technique, enhancing the reliability of gaze-based selections.
Mohan in view of Manduchi does not explicitly disclose
in response to receiving the request, selecting the analysis technique based on the request type.
Powderly teaches
in response to receiving the request, selecting the analysis technique based on the request type (“Examples of wearable systems and methods can use multiple inputs (e.g., gesture, head pose, eye gaze, voice, and/or environmental factors (e.g., location)) to determine a command that should be executed and objects in the three-dimensional (3D) environment that should be operated on. The multiple inputs can also be used by the wearable system to permit a user to interact with text, such as, e.g., composing, selecting, or editing text.” Powderly Abstract.
An analysis for each type of input is selected to match the type of input.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Powderly’s multi-modal inputs with Mohan in view of Manduchi. One of ordinary skill in the art would be motivated to provide convenience to users, because the users may have different preferences. Some users may also be ones who have special needs. “Examples of wearable systems and methods can use multiple inputs (e.g., gesture, head pose, eye gaze, voice, and/or environmental factors (e.g., location)) to determine a command that should be executed and objects in the three-dimensional (3D) environment that should be operated on. The multiple inputs can also be used by the wearable system to permit a user to interact with text, such as, e.g., composing, selecting, or editing text.” Powderly Abstract.
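As an illustrative aside (not part of Powderly’s disclosure), selecting an analysis technique based on the request type can be sketched as a simple dispatch table. The technique names and the stub analyzers are assumptions made only for illustration:

```python
def analyze_gaze(data):
    # e.g., dwell/confirmation analysis of a list of gaze samples
    return ("gaze", data)

def analyze_voice(data):
    # e.g., keyword spotting on a spoken utterance
    return ("voice", data)

def select_analysis_technique(request_type):
    """Return the analysis routine matched to the request type, as in a
    multi-modal input system (gesture, head pose, eye gaze, voice, ...)."""
    techniques = {"gaze": analyze_gaze, "voice": analyze_voice}
    if request_type not in techniques:
        raise ValueError(f"unsupported request type: {request_type}")
    return techniques[request_type]
```

Each request type thus routes to analysis logic appropriate for that modality before the activation location is selected.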
Regarding Claim 19, Mohan in view of Manduchi and Powderly teaches The method of claim 18, wherein: the request type is selected from a list of candidate request types that comprises a voice command type (“Examples of wearable systems and methods can use multiple inputs (e.g., gesture, head pose, eye gaze, voice, and/or environmental factors (e.g., location)) to determine a command that should be executed and objects in the three-dimensional (3D) environment that should be operated on. The multiple inputs can also be used by the wearable system to permit a user to interact with text, such as, e.g., composing, selecting, or editing text.” Powderly Abstract.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Powderly’s multi-modal inputs with Mohan in view of Manduchi. One of ordinary skill in the art would be motivated to provide convenience to users, because the users may have different preferences. Some users may also be ones who have special needs.
Regarding Claim 20, Mohan in view of Manduchi and Powderly teaches The method of claim 19,
wherein the list of candidate request types comprises a gaze-based request type (“Examples of wearable systems and methods can use multiple inputs (e.g., gesture, head pose, eye gaze, voice, and/or environmental factors (e.g., location)) to determine a command that should be executed and objects in the three-dimensional (3D) environment that should be operated on. The multiple inputs can also be used by the wearable system to permit a user to interact with text, such as, e.g., composing, selecting, or editing text.” Powderly Abstract.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Powderly’s multi-modal inputs with Mohan in view of Manduchi. One of ordinary skill in the art would be motivated to provide convenience to users, because the users may have different preferences. Some users may also be ones who have special needs.
Allowable Subject Matter
Claims 4-5 and 13-14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claims 4 and 13 are distinguished from Mohan in view of Manduchi, Aguilar, and Yoon because of the additional limitations recited in the claims and the combination with their parent claims:
select an updated display location of the display using the detected motion of the target object; and
move the zoom reticle to the updated display location.
Claims 5 and 14 are distinguished from Mohan in view of Manduchi, Aguilar, and Yoon because of the additional limitations recited in the claims and the combination with their parent claims:
changing a magnification level of the zoom reticle using the detected motion of the target object.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ashmore et al. (“Efficient Eye Pointing with a Fisheye Lens”):
[image: media_image12.png]
Sato et al. (“GazeScope: Gaze Target Selection with a Magnifier in VR Environments”) discloses an invention substantially similar to the claimed independent claims. However, its publication date is after the effective filing date of this case.
[image: media_image13.png]
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHENGXI LIU whose telephone number is (571)270-7509. The examiner can normally be reached M-F 9 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZHENGXI LIU/Primary Examiner, Art Unit 2611
1 Claim 11 has been objected to. For the purposes of art rejections, the Examiner is treating the claim as depending on Claim 9.