DETAILED ACTION
Response to Amendment
Applicant’s amendments filed on 10 October 2025 have been entered. Claims 1, 19, and 20 have been amended. Claims 22-25 have been added. Claim 16 has been canceled. Claims 2, 12, and 13 were previously canceled. Claims 1, 3-11, 14, 15, and 17-25 remain pending in this application, with claim 1 being independent.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10 October 2025 has been entered.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-7, 10, 14, 15, and 17-25 are rejected under 35 U.S.C. 103 as being unpatentable over Mittal et al. (US 20190025909 A1), referred to herein as Mittal, in view of WOO et al. (US 20180210627 A1), referred to herein as WOO, and HASHIMOTO et al. (US 20220137705 A1), referred to herein as HASHIMOTO.
Regarding Claim 1, Mittal teaches an information processing apparatus comprising: a processor; and a memory storing a program which, when executed by the processor, causes the information processing apparatus to: (Mittal Abst: Methods, systems, computer-readable media, and apparatuses for selecting an Augmented Reality (AR) object on a head mounted device (HMD); FIG. 9: 910, 935):
execute display control processing of performing display control of a virtual object so that the virtual object is disposed in a three-dimensional space which becomes a visual field of a user (Mittal [0027] HMD control system being performed on the processor; [0026] an HMD with augmented reality (AR) capabilities can place images of both the physical world and virtual objects over the user's field of view; [0035] a user can select the geo-located POIs or AR targets in a two or three dimensional space by further specifying the ROI or volume-of-interest (VOI), as shown in FIG. 3B, FIG. 5 and FIG. 7; [0036] the HMD can recognize the position of the user's hands in relations to the target, and therefore display different augmentations based on the position; [0037] if multiple augmentations are associated with a given AR target, the user can browse through them by scrolling in the direction of the target using VOI); and
execute selection processing of setting the virtual object, included in a selection range in the three-dimensional space, to a selected state, the selection range being obtained by expanding, in a depth direction, a two-dimensional selected region specified using an operation body at a position of a hand of the user (Mittal [0036] a depth-enabled camera (e.g., stereo camera) on the HMD can be used for using VOI to select an augmentation when the AR object has multiple augmentations associated with it. The depth-enabled camera can recognize the movement of the hands in front of the user or in the camera view. With these cameras, the HMD can recognize the position of the user's hands in relations to the target, and therefore display different augmentations based on the position; [0037] a user can select a specific layer of augmentation for AR targets, as further described in the flowchart illustrated by FIG. 6. First, a user can select a specific AR target by selecting the ROI; [0045] The HMD 120 may be configured to recognize user inputs, which may be made through gestures that may be imaged by the camera. A distance to the recognized object within the image may be determined from data gathered from the captured image and distance sensors; [0047] The camera(s) 150 (e.g., outward-facing cameras) can capture images of the user's surroundings, including the user's hand 130 and/or other objects that can be controlled by the user 110 to provide input to the HMD 120).
Mittal does not teach, but WOO teaches,
wherein the selection range is a three-dimensional range, expanded as a frustum-shaped space, which is enclosed by a plurality of lines extending from an origin corresponding to a position of the user, in a direction of a field-of-view of the user, and passing through points on a contour of the two-dimensional selected region (WOO [0041] All real and virtual objects that are far apart reside here and have transformation related to user coordinates. Transformation of the selected subspace is also created in this space. The size of the frustum is determined by a clipping distance of the virtual camera. In fact, this is tightly coupled with its performance for an available tracker and a target augmented space; [0102] The first transformation unit 324 transforms the frustum acquired by performing the forward casting and the backward casting to acquire the first subspace of the first 3D shape (S930). That is, the first transformation unit 324 acquires a first subspace by transformation into a first 3D shape having the same position, rotation, and volume according to the definition of a subspace predetermined based on a center of a frustum including a first plane, which is the slice plane parallel to the tunnel including the first collision point nearest from the eye, and the second plane, which is the slice plane parallel to the tunnel including the first collision point farthest from the eye).
Mittal does not teach, but HASHIMOTO teaches,
setting, among a plurality of virtual objects displayed in the three-dimensional space, the virtual object (HASHIMOTO [0228] FIG. 35 shows another display control example related to a three-dimensional grid. In an image 351 of (A), a three-dimensional grid K1 is displayed in the display surface 5. Similarly, this grid K1 has three grid surfaces (grid surfaces SF1, SF2, and SF3) in a depth direction. For example, the virtual objects are arranged on a front-side grid surface SF1 and an intermediate grid surface SF2, respectively. Virtual objects v11, v12, and V13 are arranged in a central row of the grid surface SF1. Virtual objects V1, V2, and V3 are arranged in a central row of the grid surface SF2. The ID mark M1 of the point P1 is not displayed on the grid K1); and
wherein, in the display control processing, in a state where the virtual object not included in the selection range is displayed, the virtual object included in the selection range is displayed in a mode different from the virtual object not included in the selection range (HASHIMOTO [0230] As another control example, when the user selects and operates a certain grid surface, the HMD 1 may display only the virtual object on the grid surface in a normal state and may not be display the virtual object on another grid surface in a transparent state. Further, the HMD 1 may put all of the points P1 and the grid lines, etc. on the other grid surfaces into non-displayed states. Alternatively, the HMD 1 may display the virtual objects on all grid surfaces in front of the selected grid surface in transparent states, or may put all the points P1 and the like in the non-displayed states; [0114] an operation for designating the target virtual object and the point P1 may be configured, in detail, separately for provisional selection and selective determination. (D) shows an example of a change in display states of the virtual object V1 due to pre-selection (non-selection), provisional selection, and selective determination).
WOO discloses a system and a method for acquiring a subspace (partial space) in an augmented space, and is analogous art to the present application.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mittal to incorporate the teachings of WOO, applying WOO's subspace selection performed within a single view frustum to the head mounted device (HMD) that defines a Region-of-Interest (ROI) based on a gesture formed by at least one hand of a user.
Doing so would allow acquiring a subspace of a 3D shape whose position and scale are adjusted according to a user's gesture.
HASHIMOTO discloses a technique for arranging a virtual object in a real-space scene with respect to VR, and is analogous art to the present application.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mittal to incorporate the teachings of HASHIMOTO, applying HASHIMOTO's selection of virtual objects in VR space to the head mounted device (HMD) that defines a Region-of-Interest (ROI) based on a gesture formed by at least one hand of a user.
Doing so would improve the usability of arranging a large number of virtual objects in a display surface seen from the user's viewpoint.
Regarding Claim 3, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 1, and further teaches wherein the operation body is the hand of the user, and the two-dimensional selected region is a region enclosed by traces of a position of a hand or a position of a fingertip of the user (Mittal [0041] FIG. 1A is simplified illustration 100 of an HMD 120 configured to define a region-of-interest (ROI) based on a user's gestures, according to one embodiment. In this embodiment, an HMD 120 worn by a user 110 has a camera and/or other sensor(s) configured to track the user's hand 130 in order to define the ROI).
Regarding Claim 4, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 1, and further teaches wherein the operation body is the hand of the user, and the two-dimensional selected region is a region specified based on position information on a plurality of points specified by the hand of the user (Mittal [0054] At block 210, a shape outlining the ROI within the transparent display area is defined and displayed to the user. The shape can be, for example, a rectangular overlay outlining the ROI on the HMD 120. The shape can give the user the visual feedback of the ROI's location).
Regarding Claim 5, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 1, and further teaches wherein the operation body is the hand of the user, and the two-dimensional selected region is a region enclosed by a predetermined shape of the hand of the user (Mittal [0057] Furthermore, the processor may track the user's hand 130 over a time interval to determine if a predetermined gesture is recognized. In this determination, the processor may determine whether any gestures are recognized in the field of view; [0058] According to one embodiment, once the ROI is identified, a picture and/or video can be capture of the image inside the ROI when the user 110 disengages. For example, if the user continues to maintain the hand gesture after receiving the shape outlining the ROI).
Regarding Claim 6, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 5, and further teaches wherein the predetermined shape of a hand is a shape formed by approaching two fingertips of one hand toward two fingertips of another hand respectively, or a shape formed by approaching two fingertips of one hand to each other (Mittal [0035] The gesture can involve pointing both hands' index and thumb fingers in the orthogonal direction as shown in FIG. 3A and FIG. 4).
Regarding Claim 7, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 1, and further teaches wherein the selection range is set using an origin corresponding to a position of the user, and a range from the origin in a direction toward which the operation body faces (Mittal [0036] According to another embodiment, a depth-enabled camera (e.g., stereo camera) on the HMD can be used for using VOI to select an augmentation when the AR object has multiple augmentations associated with it. The depth-enabled camera can recognize the movement of the hands in front of the user or in the camera view).
Regarding Claim 10, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 1, and further teaches wherein the origin corresponding to the position of the user is a mid-point between a left eye and a right eye of the user (Mittal [0042] positioning the displays 140 in front of a user's eyes; [0043] the camera 150 may be a head mounted camera, which can generate image data that a processor can analyze to estimate distances to objects in the image through trigonometric analysis of the images; FIG. 1A).
Regarding Claim 14, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 1, and further teaches wherein, in the selection processing, a range of the selection range in the depth direction is specified based on a predetermined operation by the user (Mittal [0051] A detection algorithm can be used to detect a user's gesture. The HMD can detect a predetermined gesture of a user and define the ROI based on the user's gesture. For example, a user's gesture can include pointing the index and thumb fingers of both hands in the orthogonal direction to create a rectangular shape as illustrated in FIGS. 3A and 3B. In other instances, the user's gesture can include a fist, an open hand, pointing with finger(s), a hand rotation, a wave, a movement of one or more fingers, and any combination thereof).
Regarding Claim 15, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 14, and further teaches wherein the predetermined operation is an operation of moving the position of the hand of the user in the depth direction, and in the selection processing, the selected state is sequentially canceled from the virtual object on a rear side as the hand of the user moves toward the user, and the selected state is sequentially canceled from the virtual object on a front side as the hand of the user moves away from the user (Mittal [0095] FIG. 6 is a flow diagram illustrating an embodiment of a method 600 of defining a volume-of-interest (VOI) using hand gestures along the direction of the target (e.g., z-axis). For example, the user can browse through the geo-located points or different augmentations for the same target by scrolling one or more hands along the z-axis; [0100] As illustrated in the different positions in FIG. 7, different augmentations can be displayed to the user based on the user's hand position. For example, when the user's hand position is at 705, the display 140 of the HMD 120 shows the name of the movie and the reviews. Position 710 can occur when the user's hand position moves closer in the direction of the target 720 (e.g., z-axis) in relations to position 705. At position 710, in this example, the augmentation for playing the trailer of the movie is shown on the display 140. As the user's hand position moves closer to the target, at position 715, the show times of the movie and the option to purchase tickets online can be displayed. Finally, in this example, at the position closest to the target 720, the HMD 120 can display images associated with the movie).
Regarding Claim 17, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 1, and further teaches wherein in the selection processing, in a case where the selection range is changed, the virtual object, which is not included in the selection range, is set to a deselected state (Mittal [0092] FIG. 5 illustrates an example of the ROI 305 being reduced by the user, which results in less AR targets or POIs being shown on the display 140. As described in block 220, a user can reduce the size of the ROI 305 using hand gestures (e.g., by moving hands closer). Using the reduced-sized, the display 140 in FIG. 5 now only has three 3 POIs (Starbucks 505, Henry's Diner 510, Henry's Diner 515) inside the ROI. By reducing the size of the ROI, the number of targets inside the ROI has been reduced from five POIs in FIG. 4 to three POIs in FIG. 5. According to another embodiment, the user can further reduce the size of the ROI so that only one AR target 320 is inside the ROI 305).
Regarding Claim 18, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 1, and further teaches wherein the operation body is the hand of the user, and in the selection processing the position of the hand of the user is acquired by acquiring a position of a hand or a fingertip of the user from an image captured by a camera, or based on position-and-posture information of a controller held by the hand of the user (Mittal [0098] At block 615, the system displays to the user different augmentations associated with the selected target based on user's hand position (e.g., along the z-axis as illustrated in FIG. 7); [0101] As illustrated by the example in FIGS. 6 and 7, the HMD 120 can use hand positions and/or hand gestures to define a VOI. Based on the VOI, the HMD 120 can display different augmentations associated with a selected target. Alternatively, once a target is selected, the HMD 120 can implement other modes based on user's preferences or predetermined gesture recognized functions, as illustrated in FIG. 8).
Regarding Claims 19 and 20, Mittal in view of WOO and HASHIMOTO teaches an information processing method, and a non-transitory computer-readable medium that stores a program for causing a computer to execute an information processing method (Mittal Abst: Methods, systems, computer-readable media, and apparatuses for selecting an Augmented Reality (AR) object on a head mounted device (HMD)).
The metes and bounds of claims 19 and 20 substantially correspond to the elements set forth in claim 1; thus, they are rejected on similar grounds and rationale as their corresponding limitations.
Regarding Claim 21, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 1, and further teaches wherein a range on a plane, which is parallel to the two-dimensional selected region within the selection range in the three-dimensional space, is determined on a basis of a distance from the origin along the depth direction (WOO [0041] The size of the frustum is determined by a clipping distance of the virtual camera. In fact, this is tightly coupled with its performance for an available tracker and a target augmented space… Second, the z distance of the IsoScale plane is proportional to a clipping plane of the ESD; [0091] Specifically, the adjustment unit 331 performs the relative scale mapping on the two planes of the inclusive frustum according to the two-handed pinch gesture. This is done for effectively slicing the inclusive frustum having a variable depth in a predetermined motion space. Therefore, when a physical distance of the inclusive frustum is determined by the targeting unit 320, a scale mapping ratio may be slightly changed according to an allowable depth measurement range of the pinch input).
Regarding Claim 22, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 1, and further teaches wherein in the display control processing, the virtual object not included in the selection range is displayed without changing a display mode, and the virtual object included in the selection range is displayed with a thicker contour than the virtual object not included in the selection range, or as a wire frame (HASHIMOTO [0114] The HMD 1 changes the display state of the virtual object, which has undergone the operation of the selective determination, so as to become a predetermined display state (for example, a specific color, shape, and size, etc.) indicating the selective determination. In this example, the colors are changed, but a frame or the like surrounding the virtual object of the selective determination may be displayed. Incidentally, a method of omitting the provisionally selected state is also possible).
Regarding Claim 23, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 1, and further teaches wherein in the display control processing, the virtual object not included in the selection range is displayed without changing a display mode, and the virtual object included in the selection range is displayed in a color or transparency different from that of the virtual object not included in the selection range (HASHIMOTO [0230] the user selects and operates a certain grid surface, the HMD 1 may display only the virtual object on the grid surface in a normal state and may not be display the virtual object on another grid surface in a transparent state. Further, the HMD 1 may put all of the points P1 and the grid lines, etc. on the other grid surfaces into non-displayed states. Alternatively, the HMD 1 may display the virtual objects on all grid surfaces in front of the selected grid surface in transparent states, or may put all the points P1 and the like in the non-displayed states).
Regarding Claim 24, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 1, and further teaches wherein the selected region is a region approximated to a predetermined shape based on a region specified by the user (Mittal [0033] defining a ROI based on user's gesture by capturing spatial data with one or more head mounted sensors, displaying a shape outlining the ROI on the display of the HMD, calculating parameters including distance with respect to the HMD that corresponds to the AR targets, displaying a plurality of AR objects within the ROI, reducing the size of the ROI based on user's hand movement and using a reduced-sized ROI to select a specific AR target).
Regarding Claim 25, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 14, and further wherein the predetermined operation is a Pinch-In or Pinch-Out operation of a hand gesture (WOO [0063] The present invention more quickly detects a pinch tip position by a pinch hole from a depth sensor (depth image acquisition device) attached to the HWD and combines the pinch tip position with a 3D coordinate system based on the HWD. [0064] FIG. 4 illustrates the result of the implementation of a positional 3DoF pinch tip input capable of distinguishing both hands from each other).
Claims 8 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Mittal et al. (US 20190025909 A1), referred to herein as Mittal, in view of WOO et al. (US 20180210627 A1), referred to herein as WOO, HASHIMOTO et al. (US 20220137705 A1), referred to herein as HASHIMOTO, and Broughton et al. (US 20240152245 A1), referred to herein as Broughton.
Regarding Claim 8, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 1, and further teaches wherein the origin corresponding to the position of the user (Mittal [0042] positioning the displays 140 in front of a user's eyes; [0043] the HMD 120 may include one or more distance measuring sensors (e.g., a laser or sonic range finder) that can measure distances to various surfaces within the image. In the various embodiments a variety of different types of distance measuring sensors and algorithms may be used an imaged scene to measure for measuring distances to objects within a scene viewed by the user 110).
Mittal does not teach, but Broughton teaches, a position of either a left or a right eye of the user (Broughton [0199] As shown in FIG. 5, the eye tracking device 130 (e.g., 130A or 130B) includes eye lens(es) 520, and a gaze tracking system that includes at least one eye tracking camera 540 (e.g., infrared (IR) or near-IR (NIR) cameras) positioned on a side of the user's face for which eye tracking is performed, and an illumination source 530 (e.g., IR or NIR light sources such as an array or ring of NIR light-emitting diodes (LEDs)) that emit light (e.g., IR or NIR light) towards the user's eye(s) 592).
Broughton discloses electronic devices that provide virtual reality and mixed reality experiences via a display, and is analogous art to the present application.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mittal to incorporate the teachings of Broughton, applying the at least one eye tracking unit to the head mounted device (HMD) that defines a Region-of-Interest (ROI) based on a gesture formed by at least one hand of a user.
Doing so would provide improved methods and interfaces for providing users with conditionally displayed controls and user interface elements that indicate information about content to make interaction with the computer systems more efficient and intuitive for a user.
Regarding Claim 11, Mittal in view of WOO and HASHIMOTO teaches the information processing apparatus according to claim 1, but does not teach all the claimed limitations herein.
However, Mittal in view of Broughton teaches further comprising a camera that captures a first image for a right eye and a second image for a left eye, wherein in the display control processing the first image or the second image is displayed on a first display for the right eye and a second display for the left eye (Broughton [0135] In at least one example, the first and second optical modules 11.1.1-104a-b can include respective display screens configured to project light toward the user's eyes when donning the HMD 11.1.1-100 ... The optical modules 11.1.1-104a-b can also include one or more cameras or other sensors/sensor systems for imaging and measuring the IPD of the user such that the optical modules 11.1.1-104a-b can be adjusted to match the IPD) in a case where operation by the operation body to set the selection range is started (Mittal [0026] The processor may commence operation by receiving sensor data regarding an orientation of the HMD 120. Additionally, the processor may receive image data from the cameras 150, as well as data from other sensors included in the HMD 120), and
display of the first display and the second display is returned back to display of the first image and display of the second image respectively in a case where the virtual object is selected in the selection processing (Mittal [0066] enabling interactions with elements of digital content, from the perspective of the user, to the selected object. Because these interactive elements are bound to selected objects in the user's surroundings, corresponding ROI on the HMD's display 140 can move and scale relative to the selected object's position. Additionally, according to some embodiments, the selected AR object can then be further manipulated; [0079] The HMD 120, after recognizing selection of the gesture, can display to the user a shape 315 on the HMD. The shape 315 (e.g., rectangular overlay) can outline the ROI 305).
The same motivation as claim 8 applies here.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Mittal et al. (US 20190025909 A1), referred to herein as Mittal, in view of WOO et al. (US 20180210627 A1), referred to herein as WOO, HASHIMOTO et al. (US 20220137705 A1), referred to herein as HASHIMOTO, Broughton et al. (US 20240152245 A1), referred to herein as Broughton, and VDOVYCHENKO et al. (US 20240104867 A1), referred to herein as VDOVYCHENKO.
Regarding Claim 9, Mittal in view of WOO, HASHIMOTO, and Broughton teaches the information processing apparatus according to claim 8, but does not teach all the claimed limitations herein.
However, VDOVYCHENKO teaches wherein the program which, when executed by the processor, further causes the information processing apparatus to execute setting processing of setting a dominant eye of the user, wherein the origin corresponding to the position of the user is a position of the dominant eye of the user which is set in the setting processing (VDOVYCHENKO [0068] According to one embodiment, the electronic device 101 may measure the distance to the object positioned in the front direction of the electronic device 101 using the front camera 213 …The electronic device 101 may detect an eye corresponding to the dominant eye and/or the non-dominant eye of the left eye and/or the right eye using at least one camera. For example, the electronic device 101 may detect the eye corresponding to the dominant eye and/or the non-dominant eye, based on the user's gaze direction with respect to the external object and/or the virtual object).
VDOVYCHENKO discloses a wearable electronic device that receives a plurality of operation contexts and performs an operation according to an operation context, and is analogous art to the present application.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Mittal to incorporate the teachings of VDOVYCHENKO, applying the dominant-eye tracking to the head mounted device (HMD) that defines a Region-of-Interest (ROI) based on a gesture formed by at least one hand of a user.
Doing so would provide maximum performance with less power consumption from the perspective of a computer vision solution.
Response to Arguments
Applicant’s arguments, filed on 10 October 2025, with respect to the 35 U.S.C. 103 rejections of claims 1, 19, and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samantha (Yuehan) Wang whose telephone number is (571)270-5011. The examiner can normally be reached Monday-Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Samantha (YUEHAN) WANG/
Primary Examiner
Art Unit 2617