DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
The Amendment filed on 2/17/2026 in Application No. 18/274,092 has been received and entered. Claims 1-47 and 64-65 are canceled. Claims 48-63 and 66-69 are pending. Claims 48, 60, and 67 have been amended. Claims 68 and 69 are new.
Response to Amendment
Applicant’s amendment necessitated new grounds of rejection.
This action is made final in view of the new grounds of rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 48-55, 57, 59-63 & 66-69 is/are rejected under 35 U.S.C. 103 as being unpatentable over Scavezze et al. (U.S. Pub 2014/0320389) hereinafter Scave, in view of ZHI et al. (U.S. Pub 2021/0382497) hereinafter Zhi, in view of Boesel et al. (U.S. Pub 2023/0333641) hereinafter Boe.
As per Claim 48, Scave teaches A method comprising: at an electronic device including a non-transitory memory, one or more processors, an image sensor, one or more input devices, and a display: (Fig. 1, ¶2, ¶11 wherein the mixed reality interaction system 10 includes a mixed reality interaction program 14 that may be stored in mass storage 18 of a computing device 22, and the mixed reality interaction program 14 may be loaded into memory 28 and executed by a processor 30 of the computing device 22; and wherein the method includes providing a head-mounted display device operatively connected to a computing device, with the head-mounted display device including a display system for presenting the mixed reality environment and a plurality of input sensors including a camera for capturing an image of the physical object)
obtaining, from the image sensor, image data of a physical environment, (Fig. 2, ¶2, ¶11, ¶17 wherein the method includes providing a head-mounted display device operatively connected to a computing device, with the head-mounted display device including a display system for presenting the mixed reality environment and a plurality of input sensors including a camera for capturing an image of the physical object; wherein a physical object is identified based on the captured image, and an interaction context is determined for the identified physical object based on one or more aspects of the mixed reality environment; and wherein the HMD device 36 includes a display system 48 and transparent display 44 that enables images such as holographic objects to be delivered to the eyes of a user 46, and the transparent display 44 may be configured to visually augment an appearance of a physical environment 50 to a user 46 viewing the physical environment through the transparent display)
wherein the image data is associated with a first input modality; (Fig. 2, ¶20 wherein the eye-tracking system 62 includes a gaze detection subsystem configured to detect a direction of gaze of each eye of a user)
obtaining, based on the image data, a semantic value that is associated with a physical object within the physical environment; (Fig. 3, ¶31 wherein the optical sensor system 68 of the HMD device 200 may capture image data 74 from the office 308, including image data representing the photograph 316 and other physical objects in the office, such as table 324, book 328 on the table, basketball 332, bookcase 334 and coat rack 338, and image data 74 of one or more of these physical objects may be provided by the HMD device 200 to the mixed reality interaction program 14)
obtaining user data from the one or more input devices, wherein the user data is associated with a second input modality that is different from the first input modality; (Fig. 3, ¶22, ¶44 wherein outward facing sensor 212 may detect movements within its field of view, such as gesture-based inputs or other movements performed by a user 46; and wherein the mixed reality interaction program 14 may receive a user input from user 304 that is directed at the photograph 316, for example, eye-tracking data 66 from the HMD device 200 indicating that the user 304 is gazing at the photograph 316, as indicated by gaze line 336; in other examples, the user input may take one or more other forms including, for example, position data 76 and image data 74; the position data 76 may include head pose data indicating that the user 304 is facing the photograph 316, and the image data 74 may include image data showing the user 304 pointing or gesturing at the photograph 316)
selecting a widget based on the semantic value and the user data; and displaying the widget on the display. (Fig. 3, ¶3, ¶52 wherein the user 304 may desire to manually switch between the Family Calendar interaction mode and the Family Reunion Planning interaction mode; the user may request that the current Family Calendar interaction mode be modified by, for example, speaking "Switch to Family Reunion Planning"; the mixed reality interaction program 14 interprets this user input as a request to modify the interaction mode, and changes the interaction mode accordingly; the user may then point at the photograph 316, which is captured as image data 74 via the HMD device 200; the mixed reality interaction program 14 interprets this user input as corresponding to a virtual action 90 based on the Family Reunion Planning interaction mode, such as displaying the virtual family reunion To-Do List 360, and may then execute the virtual action and display the family reunion To-Do List 360 with a modified appearance; and wherein the method includes querying a stored profile for the physical object to determine a plurality of interaction modes for the physical object and programmatically selecting a selected interaction mode from the plurality of interaction modes based on the interaction context)
However, Scave does not explicitly teach obtaining, based on a semantic segmentation of the image data, a semantic value that is associated with a physical object within the physical environment.
Zhi teaches obtaining, based on a semantic segmentation of the image data, a semantic value that is associated with a physical object within the physical environment. (Fig. 10B, Fig. 12, ¶145, ¶151 wherein the robotic device 1010 also includes one or more actuators 1012 to enable the robotic device 1010 to interact with a surrounding three-dimensional environment; wherein, as the robotic device 1010 processes the image data, it may be arranged to obtain optimized latent representations from which segmentations and/or maps (e.g. semantic segmentations or depth maps) may be derived, e.g. to enable the robotic device 1010 to map its environment; and wherein the image data 1202 in the example of FIG. 12 also includes a ground-truth semantic segmentation 1208 of the input image; a ground-truth semantic segmentation for example includes a plurality of spatial elements, each associated with a respective portion of the input image, and each of the spatial elements is labelled with a semantic label indicating the actual content of the respective portion of the input image, the actual content being, for example, the type or class of object that is present in the portion of the input image, e.g. “table”, “bed”, “chair”)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of scene representation using image processing of Zhi with the teaching of mixed reality interactions of Scave because Zhi teaches a system providing an improved mapping of an environment, wherein the system includes an input interface to receive the image data, which is representative of at least one view of a scene. The system also includes an initialization engine to generate a first latent representation associated with a first segmentation of at least a first view of the scene, wherein the first segmentation is a semantic segmentation. The initialization engine is also arranged to generate a second latent representation associated with at least a second view of the scene. The system additionally includes an optimization engine to jointly optimize the first latent representation and the second latent representation, in a latent space, to obtain an optimized first latent representation and an optimized second latent representation. (Abstract)
However, Scave as modified does not explicitly teach determining, based on the user data, an engagement score that characterizes a level of user engagement with respect to the physical object; and, in accordance with a determination that the engagement score satisfies an engagement threshold.
Boe teaches determining, based on the user data, an engagement score that characterizes a level of user engagement with respect to the physical object; (Fig. 3, ¶15, ¶69 wherein an electronic device, with a display, determines an engagement score associated with an object visible at the display, and the electronic device utilizes extremity tracking and/or eye tracking in order to determine the engagement score; wherein, when the object is a physical object, the semantic identifier 422 obtains a semantic value associated with the object, such as obtaining a semantic value of “balloon” for the particular balloon 306 illustrated in FIG. 3C; and wherein the ambience vector generator 420 uses the “balloon” semantic value in order to determine an ambience vector 310 associated with “balloon,” as is illustrated in FIG. 3D)
in accordance with a determination that the engagement score satisfies an engagement threshold, perform a function. (Fig. 3, ¶48, ¶56 wherein the engagement score satisfies the engagement criterion when the engagement score is indicative of a level of user engagement that satisfies a temporal threshold, such as eye gaze data indicating a gaze of an eye is directed to the object for more than two seconds; and wherein, based on extremity tracking data from the tracker 240, the electronic device 210 determines that a finger of the user 50 is placed within or proximate to the particular balloon 306 for more than a threshold amount of time, and the ambience vector 310 includes a first ambience value 310-1 of “Celebratory,” a second ambience value 310-2 of “Festive,” and a third ambience value 310-3 of “Fun,” because these ambiences are often associated with a balloon)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of ambience-driven user experience of Boe with the teaching of mixed reality interactions of Scave as modified because Boe teaches an improved method for determining an engagement score associated with an object visible at the display, the engagement score characterizing a level of user engagement with respect to the object; the method includes, in response to determining that the engagement score satisfies an engagement criterion, determining an ambience vector associated with the object and presenting content based on the ambience vector, the ambience vector representing a target ambient environment. (¶4)
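For illustration only, and forming no part of the grounds of rejection, the following minimal sketch shows one way the claim 48 flow as mapped above could be realized in software: a semantic value derived from a segmentation of the image data, an engagement score computed from gaze data (the second input modality), a threshold check, and widget selection. All names, data structures, and threshold values are hypothetical assumptions and do not appear in the claims or the cited references.

```python
# Hypothetical illustration of the claim 48 flow as mapped above; all names,
# thresholds, and data structures are assumptions, not from the cited references.
from dataclasses import dataclass

ENGAGEMENT_THRESHOLD = 0.7  # assumed threshold (e.g., sustained gaze)

@dataclass
class SegmentedObject:
    label: str      # semantic value, e.g. "photograph", "basketball"
    region: tuple   # bounding box of the object in the image (x, y, w, h)

def semantic_value(segmentation: list[SegmentedObject], target_region: tuple) -> str:
    """Return the semantic label of the segmented object that best matches a region."""
    def overlap(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        dx = min(ax + aw, bx + bw) - max(ax, bx)
        dy = min(ay + ah, by + bh) - max(ay, by)
        return max(dx, 0) * max(dy, 0)
    return max(segmentation, key=lambda obj: overlap(obj.region, target_region)).label

def engagement_score(gaze_dwell_s: float, dwell_for_full_score_s: float = 3.0) -> float:
    """Map gaze dwell time (second input modality) onto a 0..1 engagement score."""
    return min(gaze_dwell_s / dwell_for_full_score_s, 1.0)

def select_widget(label: str, context: str) -> str:
    """Pick a widget based on the semantic value and user-data-derived context."""
    table = {
        ("photograph", "family_calendar"): "calendar_widget",
        ("photograph", "reunion_planning"): "todo_list_widget",
        ("basketball", "default"): "team_website_widget",
    }
    return table.get((label, context), "generic_info_widget")

# Example: gaze dwells on the region of a segmented "photograph" for 2.4 seconds.
label = semantic_value(
    [SegmentedObject("photograph", (10, 10, 40, 30)),
     SegmentedObject("table", (0, 50, 100, 40))],
    target_region=(12, 12, 35, 28),
)
if engagement_score(2.4) >= ENGAGEMENT_THRESHOLD:
    print(select_widget(label, "family_calendar"))  # -> calendar_widget
```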
As per Claim 49, the rejection of claim 48 is hereby incorporated by reference; Scave as modified further teaches wherein obtaining the semantic value includes determining the semantic value by semantically identifying the physical object within the image data. (Fig. 3, ¶31 wherein the optical sensor system 68 of the HMD device 200 may capture image data 74 from the office 308, including image data representing the photograph 316 and other physical objects in the office, such as table 324, book 328 on the table, basketball 332, bookcase 334 and coat rack 338, and image data 74 of one or more of these physical objects may be provided by the HMD device 200 to the mixed reality interaction program 14; as taught by Scave)
As per Claim 50, the rejection of claim 48 is hereby incorporated by reference; Scave as modified further teaches wherein the one or more input devices includes a positional sensor, wherein the user data includes positional data from the positional sensor, and (¶26 wherein the HMD device 36 may also include a position sensor system 72 that utilizes one or more motion sensors 220 to capture position data 76, and thereby enable motion detection, position tracking and/or orientation sensing of the HMD device. For example, the position sensor system 72 may be utilized to determine a direction, velocity and acceleration of a user's head. The position sensor system 72 may also be utilized to determine a head pose orientation of a user's head; as taught by Scave)
wherein the positional data indicates one or more positional values associated with the electronic device within the physical environment. (¶26 wherein position sensor system 72 may comprise an inertial measurement unit configured as a six-axis or six-degree of freedom position sensor system. This example position sensor system may, for example, include three accelerometers and three gyroscopes to indicate or measure a change in location of the HMD device 36 within three-dimensional space along three orthogonal axes (e.g., x, y, z), and a change in an orientation of the HMD device about the three orthogonal axes (e.g., roll, pitch, yaw); as taught by Scave)
As per Claim 51, the rejection of claim 50 is hereby incorporated by reference; Scave as modified further teaches wherein the one or more positional values indicate an orientation of the electronic device, and (¶26 wherein position sensor system 72 may comprise an inertial measurement unit configured as a six-axis or six-degree of freedom position sensor system. This example position sensor system may, for example, include three accelerometers and three gyroscopes to indicate or measure a change in location of the HMD device 36 within three-dimensional space along three orthogonal axes (e.g., x, y, z), and a change in an orientation of the HMD device about the three orthogonal axes (e.g., roll, pitch, yaw); as taught by Scave)
wherein selecting the widget is based on the orientation of the electronic device. (¶34, ¶41 wherein the mixed reality interaction program 14 also determines an interaction context 84 for the framed photograph 316 based on one or more aspects of the mixed reality environment 38; and wherein the mixed reality interaction program 14 may then interpret the user 304 gazing at the photograph 316 to correspond to a virtual action 90; the virtual action 90 may be based on the selected interaction mode, in this example the Family Calendar interaction mode, and may comprise presenting to the user 304 a virtual object in the form of the user's family calendar stored in a calendar application; it will be appreciated that the user's family calendar is associated with the photograph 316 of the user's spouse; as taught by Scave)
As per claim 52, the rejection of claim 50 is hereby incorporated by reference; Scave as modified further teaches wherein the one or more positional values indicate a movement of the electronic device, and (¶26 wherein position sensor system 72 may comprise an inertial measurement unit configured as a six-axis or six-degree of freedom position sensor system. This example position sensor system may, for example, include three accelerometers and three gyroscopes to indicate or measure a change in location of the HMD device 36 within three-dimensional space along three orthogonal axes (e.g., x, y, z), and a change in an orientation of the HMD device about the three orthogonal axes (e.g., roll, pitch, yaw); as taught by Scave)
wherein selecting the widget is based on the movement of the electronic device. (¶34, ¶41 wherein the mixed reality interaction program 14 also determines an interaction context 84 for the framed photograph 316 based on one or more aspects of the mixed reality environment 38; wherein, with reference again to FIG. 1, such aspects of the mixed reality environment 38 may include one or more data feeds 86 originating from the mixed reality environment 38 or externally to the mixed reality environment, and data feeds 86 may include location data; and wherein the mixed reality interaction program 14 may then interpret the user 304 gazing at the photograph 316 to correspond to a virtual action 90; the virtual action 90 may be based on the selected interaction mode, in this example the Family Calendar interaction mode, and may comprise presenting to the user 304 a virtual object in the form of the user's family calendar stored in a calendar application; it will be appreciated that the user's family calendar is associated with the photograph 316 of the user's spouse; as taught by Scave)
As per Claim 53, the rejection of claim 50 is hereby incorporated by reference; Scave as modified further teaches wherein the positional sensor corresponds to an inertial measurement unit (IMU), and the positional data includes IMU data from the IMU. (¶26 wherein position sensor system 72 may comprise an inertial measurement unit configured as a six-axis or six-degree of freedom position sensor system. This example position sensor system may, for example, include three accelerometers and three gyroscopes to indicate or measure a change in location of the HMD device 36 within three-dimensional space along three orthogonal axes (e.g., x, y, z), and a change in an orientation of the HMD device about the three orthogonal axes (e.g., roll, pitch, yaw); as taught by Scave)
As per Claim 54, the rejection of claim 50 is hereby incorporated by reference; Scave as modified further teaches wherein the positional sensor corresponds to a global positioning system (GPS) sensor, and the positional data includes GPS data from the GPS sensor. (¶27 wherein position sensor system 72 may also support other suitable positioning techniques, such as GPS or other global navigation systems; as taught by Scave)
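For illustration only, and forming no part of the grounds of rejection, the following sketch shows how positional sensor data of the kind recited in claims 50-54 (IMU-derived orientation and movement, GPS position) might inform widget selection; the field names, units, and selection rules are hypothetical assumptions, not taken from Scave or the claims.

```python
# Hypothetical sketch of using positional sensor data (claims 50-54) to inform
# widget selection; sensor fields and rules are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class PositionalData:
    yaw_deg: float      # IMU-derived orientation of the device
    speed_m_s: float    # IMU/GPS-derived movement of the device
    latitude: float     # GPS position
    longitude: float

def widget_for_position(label: str, pos: PositionalData) -> str:
    """Bias widget selection on the device's orientation and movement."""
    if pos.speed_m_s > 1.0:
        # While the user is walking, prefer a compact, glanceable widget.
        return f"compact_{label}_widget"
    if -45.0 <= pos.yaw_deg <= 45.0:
        # Device oriented toward the object: show the full widget.
        return f"full_{label}_widget"
    return f"peripheral_{label}_widget"

print(widget_for_position("photograph", PositionalData(10.0, 0.2, 37.77, -122.42)))
# -> full_photograph_widget
```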
As per Claim 55, the rejection of claim 48 is hereby incorporated by reference; Scave as modified further teaches wherein the one or more input devices includes an audio sensor, wherein the user data includes audio data from the audio sensor, and (¶27 wherein the HMD device 36 may also include a microphone system 80 that includes one or more microphones 224 that capture audio data 82; as taught by Scave)
wherein selecting the widget is based on the audio data. (¶52 wherein the user 304 may desire to manually switch between the Family Calendar interaction mode and the Family Reunion Planning interaction mode. The user may request that the current Family Calendar interaction mode be modified by, for example, speaking "Switch to Family Reunion Planning." The mixed reality interaction program 14 interprets this user input as a request to modify the interaction mode, and changes the interaction mode accordingly; as taught by Scave)
As per Claim 57, the rejection of claim 48 is hereby incorporated by reference; Scave as modified further teaches wherein selecting the widget based on the semantic value and the user data includes: in accordance with a determination that the user data indicates a first context value, selecting a first widget based on the semantic value and the first context value; and (Fig. 3, ¶41 wherein the mixed reality interaction program 14 may then interpret the user 304 gazing at the photograph 316 to correspond to a virtual action 90; the virtual action 90 may be based on the selected interaction mode, in this example the Family Calendar interaction mode, and may comprise presenting to the user 304 a virtual object in the form of the user's family calendar stored in a calendar application; it will be appreciated that the user's family calendar is associated with the photograph 316 of the user's spouse; as taught by Scave)
in accordance with a determination that the user data indicates a second context value, selecting a second widget based on the semantic value and the second context value, wherein the first widget is different from the second widget. (Fig. 3, ¶45 wherein the user 304 may use his right hand 344 to point at the basketball 332 near wall 348. The mixed reality interaction program 14 may identify the basketball, determine an interaction context for the basketball, query a stored profile for the basketball, and programmatically select a selected interaction mode based on the interaction context in a manner similar to that described above for the photograph 316. A camera in the HMD device 200 may capture image data 74 showing the user's right hand 344 pointing at the basketball 332. The mixed reality interaction program 14 may interpret this user input to correspond to displaying a website of the user's favorite basketball team; as taught by Scave)
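For illustration only, and forming no part of the grounds of rejection, a minimal sketch of the claim 57 branching follows, in which the same semantic value yields different widgets depending on which context value the user data indicates; the context values and widget names are hypothetical assumptions.

```python
# Hypothetical sketch of claim 57: the same semantic value yields different widgets
# depending on the context value indicated by the user data. Names are assumptions.
def select_widget_for_context(semantic_value: str, context_value: str) -> str:
    if context_value == "first_context":    # e.g., everyday planning
        return f"{semantic_value}_calendar_widget"
    if context_value == "second_context":   # e.g., event planning
        return f"{semantic_value}_todo_widget"
    return f"{semantic_value}_default_widget"

print(select_widget_for_context("photograph", "first_context"))   # first widget
print(select_widget_for_context("photograph", "second_context"))  # second, different widget
```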
As per Claim 59, the rejection of claim 48 is hereby incorporated by reference; Scave as modified further teaches wherein the widget is displayed world- locked to the physical object. (Fig. 3, ¶44 wherein the calendar 340 may be displayed just above the photograph 316 such that when the user 304 gazes at the photograph, the calendar is presented in an easily viewable location just above the photograph. Being geo-located to the photograph 316, the calendar 340 may remain "tethered" to the photograph and may track the location of the photograph in the office 308. Advantageously, if the photograph 316 is moved to another location in the office 308, the user 304 may still easily recall and view the calendar 340 by gazing at the photograph 316; as taught by Scave)
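For illustration only, and forming no part of the grounds of rejection, the following sketch illustrates a world-locked widget as recited in claim 59: the widget's displayed position is re-anchored each frame to the tracked position of the physical object, so the widget remains tethered to the object if the object is moved. The offset and coordinate values are hypothetical assumptions.

```python
# Hypothetical illustration of a world-locked widget (claim 59): the widget's
# displayed pose tracks the physical object's pose each frame. Values are assumed.
def world_locked_pose(object_position: tuple, offset: tuple = (0.0, 0.15, 0.0)) -> tuple:
    """Place the widget at a fixed offset above the tracked object, in world space."""
    ox, oy, oz = object_position
    dx, dy, dz = offset
    return (ox + dx, oy + dy, oz + dz)

# Each frame, re-anchor the widget to the object's latest tracked position so it
# stays "tethered" to the object even if the object is moved within the room.
for tracked_position in [(1.0, 1.2, 2.0), (1.1, 1.2, 2.0)]:
    print(world_locked_pose(tracked_position))
```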
Claim 60 is similar in scope to Claim 48; therefore, Claim 60 is rejected under the same rationale as Claim 48.
Claim 61 is similar in scope to Claim 50; therefore, Claim 61 is rejected under the same rationale as Claim 50.
Claim 62 is similar in scope to Claim 51; therefore, Claim 62 is rejected under the same rationale as Claim 51.
Claim 63 is similar in scope to Claim 52; therefore, Claim 63 is rejected under the same rationale as Claim 52.
Claim 66 is similar in scope to Claim 57; therefore, Claim 66 is rejected under the same rationale as Claim 57.
Claim 67 is similar in scope to Claim 48; therefore, Claim 67 is rejected under the same rationale as Claim 48.
As per Claim 68, the rejection of claim 48 is hereby incorporated by reference; Scave as modified further teaches further comprising determining engagement scores for a plurality of physical objects and selectively selecting the widget for the physical object when its engagement score exceeds the engagement threshold while foregoing selection for other physical objects whose engagement scores do not exceed the engagement threshold. (Fig. 3A-3D, Fig. 5, Fig. 6, ¶54, ¶55, ¶66, ¶79 wherein the viewable region 214 of the display 212 includes balloons 304 resting atop of a table 302, corresponding to physical objects; wherein the electronic device identifies a particular balloon 306 of the balloons 304 based on the one or more tracking functions; wherein the electronic device 210 determines an engagement score associated with the particular balloon 306; wherein the ambience vector generator 420 includes a semantic identifier 422 that aids the ambience vector generator 420 in determining the ambience vector, for example, when the object is a physical object, the semantic identifier 422 obtains a semantic value associated with the object, such as obtaining a semantic value of “balloon” for the particular balloon 306 illustrated in FIG. 3C, and the ambience vector generator 420 uses the “balloon” semantic value in order to determine an ambience vector 310 associated with “balloon”; and wherein, in response to determining that the engagement score satisfies the engagement criterion, an ambience vector associated with the object is determined, the ambience vector representing a target ambient environment; as taught by Boe. Fig. 3, ¶3, ¶52 wherein the mixed reality interaction program 14 interprets this user input as corresponding to a virtual action 90 based on the Family Reunion Planning interaction mode, such as displaying the virtual family reunion To-Do List 360, and may then execute the virtual action and display the family reunion To-Do List 360 with a modified appearance; and wherein the method includes querying a stored profile for the physical object to determine a plurality of interaction modes for the physical object and programmatically selecting a selected interaction mode from the plurality of interaction modes based on the interaction context; as taught by Scave)
As per Claim 69, the rejection of claim 48 is hereby incorporated by reference; Scave as modified further teaches further comprising, in response to detecting user input directed to a first displayed widget, reducing the engagement score associated with the physical object and ceasing to display a second widget associated with the physical object when the engagement score falls below the engagement threshold. (Fig. 5, Fig. 6, ¶76, ¶78, ¶85 wherein, as represented by block 502, the method 500 includes determining an engagement score associated with an object visible at a display; the engagement score characterizes a level of user engagement with respect to the object and may characterize the extent to which a user is focused on the object, such as for how long the user is focused on the object, how often focus is diverted to a different object, etc.; wherein the method 500 includes determining whether or not the engagement score satisfies an engagement criterion, such as the tracker 240 providing tracking data that indicates that an eye gaze of the user 50 has been directed at the particular raindrop 244 for more than the threshold amount of time (e.g., more than three seconds); in response to determining that the engagement score satisfies the engagement criterion, the method 500 proceeds to a portion of the method 500 represented by block 510; on the other hand, in response to determining that the engagement score does not satisfy the engagement criterion, the method 500 reverts back to the portion of the method 500 represented by block 502; Examiner interprets the cyclic function to allow for the score to fall below a threshold and to restart the engagement score assessment; as taught by Boe)
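For illustration only, and forming no part of the grounds of rejection, the following sketch illustrates the per-object engagement-score handling recited in claims 68 and 69: a widget is selected only for objects whose score exceeds the threshold, and a displayed widget is dismissed once its object's score falls below the threshold. All scores and the threshold are hypothetical assumptions.

```python
# Hypothetical sketch for claims 68-69: per-object engagement scores, widget
# selection only above the threshold, and removal when the score later falls
# below the threshold. All values are assumed for illustration.
ENGAGEMENT_THRESHOLD = 0.7

scores = {"photograph": 0.9, "basketball": 0.3, "book": 0.5}
displayed = {}

# Claim 68: select a widget only for objects whose score exceeds the threshold,
# foregoing selection for the others.
for obj, score in scores.items():
    if score > ENGAGEMENT_THRESHOLD:
        displayed[obj] = f"{obj}_widget"

# Claim 69: an input directed to one widget reduces engagement with the object;
# once the score drops below the threshold, the associated widget is dismissed.
scores["photograph"] -= 0.4
for obj in list(displayed):
    if scores[obj] < ENGAGEMENT_THRESHOLD:
        displayed.pop(obj)

print(displayed)  # -> {} once engagement with "photograph" has dropped to 0.5
```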
Claim(s) 56 is/are rejected under 35 U.S.C. 103 as being unpatentable over Scave in view of Zhi in view of Boe, as applied to claim 55 above, and further in view of Smith et al. (U.S. Pub 2019/0384407) hereinafter Smith.
As per Claim 56, the rejection of claim 55 is hereby incorporated by reference; Scave as modified previously taught selecting the widget. However, Scave as modified does not explicitly teach wherein selecting the widget based on the audio data includes determining that the audio data satisfies an audio pattern criterion.
Smith teaches wherein selecting the widget based on the audio data includes determining that the audio data satisfies an audio pattern criterion. (Fig. 1, Fig. 6, ¶100, ¶101 wherein, at block 601, method 600 composes an audio signal with a selected number of non-audible, ultrasonic frequencies (e.g., 3 different frequencies); block 602 repeats the audio signal every N milliseconds, such that Doppler detection and recognition of audio patterns starts; and at block 603, user 101 performs a gesture sequence to enable a selected action, records the received audio signal, de-noises it, and filters the discrete frequencies)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of gesture sequences in mixed reality of Smith with the teaching of mixed reality interactions of Scave as modified because Smith teaches improved systems and methods enabling two-handed gesture sequences in virtual, augmented, and mixed reality (xR) applications, wherein, in an illustrative, non-limiting embodiment, an Information Handling System (IHS) may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution, cause the IHS to: receive a gesture sequence from a user wearing an HMD coupled to the IHS, where the HMD is configured to display an xR application; identify the gesture sequence as a two-handed sequence; and execute a command in response to the two-handed sequence. (¶5)
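For illustration only, and forming no part of the grounds of rejection, the following sketch shows one simplified way an audio pattern criterion of the kind recited in claim 56 could be evaluated, by checking whether a captured audio frame contains energy at a set of expected frequencies; this is a hypothetical, assumption-based example and is not Smith's Doppler-based gesture detection.

```python
# Hypothetical sketch of an "audio pattern criterion" (claim 56): check whether a
# captured audio frame carries energy at a set of expected frequencies. The
# frequencies, threshold, and frame size are assumptions for illustration.
import numpy as np

SAMPLE_RATE = 48_000
EXPECTED_FREQS_HZ = (19_000, 20_000, 21_000)  # assumed near-ultrasonic tones
ENERGY_THRESHOLD = 0.1

def satisfies_audio_pattern(frame: np.ndarray) -> bool:
    """Return True if every expected frequency carries sufficient spectral energy."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    for f in EXPECTED_FREQS_HZ:
        bin_idx = int(np.argmin(np.abs(freqs - f)))
        if spectrum[bin_idx] / len(frame) < ENERGY_THRESHOLD:
            return False
    return True

# Example: a synthetic frame containing the three expected tones.
t = np.arange(4096) / SAMPLE_RATE
frame = sum(np.sin(2 * np.pi * f * t) for f in EXPECTED_FREQS_HZ)
print(satisfies_audio_pattern(frame))  # -> True
```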
Claim(s) 58 is/are rejected under 35 U.S.C. 103 as being unpatentable over Scave in view of Zhi in view of Boe, as applied to claim 48 above, and further in view of LOFORTE et al. (U.S. Pub 2020/0387485) hereinafter Loforte.
As per Claim 58, the rejection of claim 48 is hereby incorporated by reference; Scave as modified previously taught selecting the widget. However, Scave as modified does not explicitly teach wherein selecting the widget is further based on a permission level.
Loforte teaches wherein selecting the widget is further based on a permission level. (Fig. 5, Fig. 6, ¶31 wherein, when the user device receives a selection of the widget 515 (or a button, link, or the like within the widget) and the user device has received log-on credentials for a user with permission to edit product documentation, the user device displays options for editing the product documentation)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of customizing product documentation of Loforte with the teaching of mixed reality interactions of Scave as modified because Loforte teaches an improved method for customizing product documentation by receiving, from a user device, log-on credentials, determining, based on the log-on credentials, an organization, and receiving, from a database server, product documentation for a product. The method also includes receiving, from the database server, modifications for the product documentation that are associated with the organization, applying the modifications to the product documentation to create modified product documentation, and sending, to the user device, the modified product documentation. (Abstract, ¶6)
Response to Arguments
Applicant's arguments have been considered but are moot in view of the new ground(s) of rejection, wherein Boe is relied upon to teach the limitation reciting “determining, based on the user data, an engagement score that characterizes a level of user engagement with respect to the physical object; in accordance with a determination that the engagement score satisfies an engagement threshold.”
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See form PTO-892 for a list of the cited references.
Inquiry
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANGIE BADAWI whose telephone number is (571) 270-7590. The examiner can normally be reached Monday through Wednesday, 9:00 am - 5:00 pm EST, with Thursdays and Fridays off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fred Ehichioya can be reached at (571) 272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANGIE BADAWI/Primary Examiner, Art Unit 2179