DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office action is in response to communications filed 10/24/2025. Claims 1-2, 7-8, 12-13, and 18-19 are amended. Claims 3, 14, and 21-55 are cancelled. Claims 1-2, 4-13, 15-20, and 56-57 are pending in this action.
Response to Arguments
Applicant's arguments filed 10/24/2025 have been fully considered but are not persuasive.
In response to Applicant's arguments on pages 9-10 that "Crocker in view of Kavallierou in further view of Tichenor fails to teach or suggest, inter alia, 'generating, based at least in part on a character identified in the content item, an extended reality avatar; placing, based at least in part on the mapping, the extended reality avatar in a virtual space location that corresponds to a physical structure of the one or more physical structures in the physical space' and 'performing, based at least in part on the pre-set setting, one or more adaptation operations on the extended reality avatar, wherein the one or more adaption operations adapt the extended reality avatar to the one or more physical structures,' as recited in claim 1, as amended. Crocker, Kavallierou and Tichenor are silent on the aforementioned features," the Examiner respectfully disagrees. Applicant should please note that at least Crocker teaches:
generating, based at least in part on a character identified in the content item, an extended reality avatar (i.e., object 416, see Crocker, at least at [0048], Fig. 4, and other related text);
placing, based at least in part on the mapping, the extended reality avatar in a virtual space location (see Crocker, at least at [0048], Fig. 4, and other related text), and Tichenor at least teaches:
placing, based at least in part on mapping, an extended reality avatar in a virtual space location that corresponds to a physical structure of the one or more physical structures in the physical space (see Tichenor, at least at col 3, lines 22-35, col 5, lines 3-41, col 6, lines 7-32, col 8, line 64 – col 9, line 9, col 10, lines 25-35, col 17, lines 26-57, and other related text).
Specifically, Crocker teaches, at least at [0048], that:
The 2D media content displayed on virtual screen 412 depicts a scene that includes a cartoon character 414, who is about to walk off screen. Trigger generation engine 232 detects a corresponding trigger based on the point in time when the cartoon character 414 walks off screen via any of the techniques described herein. In response to detecting the trigger, trigger generation engine 232 generates a 3D virtual object of the cartoon character 416 that appears in the 3D virtual or augmented environment at the point where the cartoon character 414 is no longer viewable in the 2D media content. The 3D virtual cartoon character 416 may walk into the VR/AR environment 400, look at and/or speak to the user 410, and then consequently walk back into the scene displayed in the 2D media content. The 2D media content then resumes playback.
And Tichenor teaches, at least at col 3, lines 6-35, and at col 4, line 59 – col 5, line 41 that:
(col 3, lines 6-35) For example, augments can represent people, places, and things in an artificial reality environment and can respond to a context such as a current display mode, date or time of day, a type of surface the augment is on, a relationship to other augments, etc. A controller in the artificial reality system, sometimes referred to as a “shell,” can control how artificial reality environment information is surfaced to users, what interactions can be performed, and what interactions are provided to applications. Augments can live on “surfaces” with context properties and layouts that cause the augments to be presented or act in different ways. Augments and other objects (real or virtual) can also interact with each other, where these interactions can be mediated by the shell and are controlled by rules in the augments evaluated based on contextual information from the shell.
An augment can be created by requesting the augment from the artificial reality system shell, where the request supplies a manifest specifying initial properties of the augment. The manifest can specify parameters such as an augment title, a type for the augment, display properties (size, orientation, location, eligible location type, etc.) for the augment in different display modes or contexts, context factors the augment needs to be informed of to enable display modes or invoke logic, etc. The artificial reality system can supply the augment as a volume, with the properties specified in the manifest, for the requestor to place in the artificial reality environment and write presentation data into. Additional details on creating augments are provided below in relation to FIGS. 6A and 6B.
(col 4, line 59 – col 5, line 41) Augments may be located in an artificial reality environment by being attached to a surface. A “surface” can be a point, 2D area, or 3D volume to which one or more augments can be attached. Surfaces can be world-locked or positioned relative to a user or other object. Surfaces can be defined by shape, position, and in some cases, orientation. In some implementations, surfaces can have specified types such as a point, a wall (e.g., a vertical 2D area), a floor or counter (e.g., a horizontal 2D area), a face, a volume, etc. Surfaces can be created in various contexts, such as synthetic surfaces, semantic surfaces, or geometric surfaces.
Synthetic surfaces can be generated without using object recognition or room mapping. Examples of synthetic surfaces include a bubble (e.g., a body-locked surface positioned relative to the user as the user moves in the artificial reality environment, regardless of real-world objects); a surface attached to a device (e.g., the artificial reality system may include controllers, an external processing element, etc. that periodically update their position to the artificial reality system, allowing surfaces to be placed relative to the device); a floating surface (e.g., a world-locked surface with a location specified in relation to the position of the artificial reality system, but adjusted to appear fixed as movements of the artificial reality system are detected, thus not requiring understanding of the physical world, other than artificial reality system movement, to be positioned).
Semantic surfaces can be positioned based on recognized (real or virtual) objects, such as faces, hands, chairs, refrigerators, tables, etc. Semantic surfaces can be world-locked, adjusting their display in a field of view to be displayed with a constant relative position to the recognized objects. Semantic surfaces can be molded to fit the recognized object or can have other surface shapes positioned relative to the recognized object.
Geometric surfaces can map to structures in the world, such as portions of a wall or floor or can specify a single point in space. While in some instances geometric surfaces can be a type of semantic surface, in other cases, geometric surfaces can exist independent of ongoing object recognition as they are less likely to be repositioned. For example, portions of a wall can be mapped using a simultaneous localization and mapping (“SLAM”) system. Such surfaces can then be used by the same or other artificial reality system systems by determining a position of the artificial reality system in the map, without having to actively determine other object locations. Examples of geometric surfaces can include points, 2D areas (e.g., portions of floors, counters, walls, doors, windows, etc.), or volumes relative to structures (e.g., cuboids, spheres, etc. positioned relative to the floor, a wall, the inside of a room, etc.)
Therefore, the combination of Crocker in view of Kavallierou and Tichenor reasonably meets the limitations as claimed, and the rejection of record is maintained as set forth below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-5, 12, 15-16 and 56-57 are rejected under 35 U.S.C. 103 as being unpatentable over Crocker (of record) in view of Kavallierou (of record) and Tichenor (of record).
Regarding claims 1 and 12, Crocker discloses a method comprising:
mapping, at an extended reality device, a virtual space to a physical space (see Crocker, at least at [0025], [0028], and other related text);
receiving, at an extended reality device (i.e., 2D/3D object store 244, see Crocker, at least at [0024]-[0026], [0034], [0064], and other related text), pre-generated extended reality content (i.e., 2D and 3D objects, see Crocker, at least at [0023]-[0025], [0064], and other related text);
identifying that a content item has started playback at a display (see Crocker, at least at [0023]-[0025], [0064], and other related text);
identifying, based at least in part on the mapping, one or more physical structures other than the physical display in the physical space (i.e., the floor, see Crocker, at least at [0028], and other related text);
generating, based at least in part on a character identified in the content item, an extended reality avatar (i.e., object 416, see Crocker, at least at [0048], Fig. 4, and other related text);
placing, based at least in part on the mapping, the extended reality avatar in a virtual space location (see Crocker, at least at [0048], Fig. 4, and other related text);
performing one or more adaptation operations on the extended reality content, wherein the one or more adaption operations adapt the extended reality content to the one or more physical structures (i.e., virtual object positioned and anchored to the floor, see Crocker, at least at [0028], and other related text); and
generating, for concurrent output:
the content item and the extended reality avatar (see Crocker, at least at [0023]-[0025], [0064], [0069], [0074], and other related text).
Crocker does not specifically disclose mapping, via a spatial mapping sensor of an extended reality device, the virtual space to the physical space;
identifying, via the spatial mapping sensor of the extended reality device, a physical display in the physical space;
identifying that the content item has started playback at the physical display;
placing, based at least in part on the mapping, the extended reality avatar in a virtual space location that corresponds to a physical structure of the one or more physical structures in the physical space;
accessing a pre-set setting selecting a type of adaption operation to apply to extended reality content;
performing, based at least in part on the pre-set setting, the one or more adaptation operations on the extended reality avatar, wherein the one or more adaption operations adapt the extended reality avatar to the one or more physical structures; and
generating, for concurrent output:
the extended reality content at the extended reality device; and
the content item at the physical display.
In an analogous art relating to a system for providing video content, Kavallierou discloses identifying, via a spatial mapping sensor of an extended reality device (i.e., tracking technology of the AR head-mounted display, see Kavallierou, at least at [0052]-[0053], and other related text), a physical display in the physical space (i.e., the television/television screen, see Kavallierou, at least at [0019], [0028]-[0031], [0052]-[0053], [0082], and other related text);
identifying that a content item has started playback at the physical display (see Kavallierou, at least at [0023]-[0025], [0036], [0041], and other related text);
performing one or more adaptation operations on extended reality content (see Kavallierou, at least at [0047]-[0049], and other related text); and
generating for concurrent output (see Kavallierou, at least at [0019], [0025], [0028]-[0031], [0052]-[0053], [0082], and other related text):
the extended reality content at the extended reality device (see Kavallierou, at least at [0019], [0025], [0028]-[0031], [0047]-[0048], [0052]-[0053], [0082], and other related text); and
the content item at the physical display (see Kavallierou, at least at [0019], [0025], [0028]-[0031], [0052]-[0053], [0082], and other related text).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Crocker to include the limitations as taught by Kavallierou for the advantage of providing a more robust system for user enjoyment.
Crocker in view of Kavallierou does not specifically disclose mapping, via a spatial mapping sensor of an extended reality device, the virtual space to the physical space;
identifying, based at least in part on the mapping, one or more physical structures other than the physical display in the physical space;
placing, based at least in part on the mapping, the extended reality avatar in a virtual space location that corresponds to a physical structure of the one or more physical structures in the physical space;
accessing a pre-set setting selecting a type of adaption operation to apply to extended reality content; and
performing, based at least in part on the pre-set setting, one or more adaptation operations on the extended reality content, wherein the one or more adaption operations adapt the extended reality content to the one or more physical structures.
In an analogous art relating to a system for augmenting reality, Tichenor discloses mapping, via a spatial mapping sensor of an extended reality device, the virtual space to the physical space (see Tichenor, at least at col 3, lines 22-35, col 5, lines 3-41, col 6, lines 7-32, col 8, line 64 – col 9, line 9, col 10, lines 25-35, and other related text);
identifying, based at least in part on the mapping, one or more physical structures other than the physical display in the physical space (see Tichenor, at least at col 3, lines 22-35, col 5, lines 3-41, col 6, lines 7-32, col 8, line 64 – col 9, line 9, col 10, lines 25-35, col 17, lines 26-57, and other related text);
placing, based at least in part on mapping, an extended reality avatar in a virtual space location that corresponds to a physical structure of the one or more physical structures in the physical space (see Tichenor, at least at col 3, lines 22-35, col 5, lines 3-41, col 6, lines 7-32, col 8, line 64 – col 9, line 9, col 10, lines 25-35, col 17, lines 26-57, and other related text);
accessing a pre-set setting selecting a type of adaption operation to apply to extended reality content (see Tichenor, at least at col 24, lines 55-60 and other related text); and
performing, based at least in part on the pre-set setting, one or more adaptation operations on the extended reality avatar (see Tichenor, at least at col 3, lines 22-35, col 4, lines 59-67, col 5, lines 3-41, col 6, lines 7-32, col 8, line 64 – col 9, line 9, col 10, lines 25-35, col 17, lines 26-57, and other related text), wherein the one or more adaption operations adapt the avatar to the one or more physical structures (see Tichenor, at least at col 25, line 3 – col 26, line 67, and other related text); and
generating, for concurrent output:
the extended reality content and the extended reality avatar at the extended reality device (see Tichenor, at least at Fig. 10, and related text).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Crocker in view of Kavallierou to include the limitations as taught by Tichenor for the advantage of providing more diverse presentation options for content.
Regarding claims 4 and 15, Crocker in view of Kavallierou and Tichenor discloses wherein the display is a virtual display (see Crocker, at least at [0023]-[0025], [0064], and other related text), the method further comprising:
generating, at the extended reality device, a virtual display for outputting the content item; and playing back the content item at the virtual display (see Crocker, at least at [0023]-[0025], [0064], [0069], [0074], and other related text).
Regarding claims 5 and 16, Crocker in view of Kavallierou and Tichenor discloses wherein the pre-generated extended reality content further comprises metadata indicating a relative location, movement data and/or timing data for an extended reality object, animation and/or scene (see Crocker, at least at [0021], [0036], and other related text).
Regarding claims 56-57, Crocker in view of Kavallierou and Tichenor discloses wherein the one or more adaption operations comprise at least one of a scaling operation and a tiling operation (i.e., changes in properties relating to size, see Tichenor, col 4, lines 25-30, col 6, lines 25-39, col 15, lines 40-57, col 16, lines 28-37, col 17, lines 27-57, col 18, lines 2-61, and other related text).
Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Crocker (of record) in view of Kavallierou (of record) and Tichenor (previously cited), as applied to claims 1 and 12 above, and further in view of Yeung (of record).
Regarding claims 2 and 13, Crocker in view of Kavallierou and Tichenor discloses identifying a time stamp of the content item (see Crocker, at least at [0036]-[0039], and other related text), and wherein generating the extended reality content for output at the extended reality device further comprises generating the extended reality content for output at the time stamp (see Crocker, at least at [0036]-[0039], and other related text).
In an analogous art relating to a system for providing video content, Yeung discloses identifying a time stamp of a content item (see Yeung, at least at [0135]-[0136], and other related text);
identifying a corresponding time stamp of extended reality content (see Yeung, at least at [0135]-[0136], and other related text); and
generating extended reality content for output at the time stamp (see Yeung, at least at [0135]-[0136], and other related text).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Crocker in view of Kavallierou and Tichenor to include the limitations as taught by Yeung for the advantage of providing a more robust system to efficiently provide content to a user.
Claims 6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Crocker (of record) in view of Kavallierou (of record) and Tichenor (previously cited), as applied to claims 1 and 12 above, and further in view of Jensen (of record).
Regarding claims 6 and 17, Crocker in view of Kavallierou and Tichenor does not specifically disclose receiving, at a physical computing device, a request to access a preview of the pre-generated extended reality content;
generating, for output at the physical computing device, a two-dimensional representation of at least a portion of the virtual reality content; receiving, at the physical computing device, input associated with navigating the preview; and generating for output, at the physical computing device and based on the input, an updated view of the virtual reality content.
In an analogous art relating to a system for providing video content, Jensen discloses receiving, at a physical computing device, a request to access a preview of pre-generated extended reality content (see Jensen, at least at [0027], [0041], and other related text);
generating, for output at the physical computing device, a two-dimensional representation of at least a portion of the virtual reality content (see Jensen, at least at [0027], [0041], and other related text);
receiving, at the physical computing device, input associated with navigating the preview (i.e., tracking input that must be included, see Jensen, at least at [0027], [0041], and other related text); and
generating for output, at the physical computing device and based on the input, an updated view of the virtual reality content (i.e., tracked movement, see Jensen, at least at [0027], [0041], and other related text).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Crocker in view of Kavallierou and Tichenor to include the limitations as taught by Jensen for the advantage of providing a more robust system to efficiently provide content to a user.
Claims 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Crocker (of record) in view of Kavallierou (of record) and Tichenor (previously cited), as applied to claims 1 and 12 above, and further in view of Shpigelman (of record).
Regarding claims 7 and 18, Crocker in view of Kavallierou and Tichenor does not specifically disclose receiving user input to pause the content item;
pausing playback of the content item; and
generating, for output at the extended reality device, the extended reality avatar for display while the content item is paused.
In an analogous art relating to a system for providing content to a user, Shpigelman discloses receiving user input to pause a content item (see Shpigelman, at least at [0008]-[0009], [0029], [0039], [0048], and other related text);
pausing playback of the content item (see Shpigelman, at least at [0009], [0029], [0039], [0048], and other related text); and
generating, for output at the extended reality device, an extended reality object for display while the content item is paused (see Shpigelman, at least at [0009], [0029], [0039], [0048], and other related text).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Crocker in view of Kavallierou and Tichenor to include the limitations as taught by Shpigelman for the advantage of providing a more robust system to efficiently provide content to a user.
Claims 8 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Crocker (of record) in view of Kavallierou (of record), Tichenor (previously cited) and Shpigelman (of record), as applied to claims 7 and 18 above, and further in view of Demirchian (of record).
Regarding claims 8 and 19, Crocker in view of Kavallierou, Tichenor and Shpigelman discloses integrating a virtual assistant with the virtual reality object (see Crocker, at least at [0048], and other related text), but does not specifically disclose
receiving a query via the integrated virtual assistant;
generating a response to the query; and
concurrently outputting the response to the query and animating the virtual reality object based on the response to the query.
In an analogous art relating to a system for providing content to a user, Demirchian discloses integrating a virtual assistant with an extended reality avatar (see Demirchian, at least at [0072], and other related text);
receiving a query via the integrated virtual assistant (see Demirchian, at least at [0060], [0080], and other related text);
generating a response to the query (see Demirchian, at least at [0060], [0080], and other related text); and
concurrently outputting the response to the query and animating the virtual reality avatar based on the response to the query (see Demirchian, at least at [0059]-[0060], [0080], and other related text).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Crocker in view of Kavallierou, Tichenor and Shpigelman to include the limitations as taught by Demirchian for the advantage of providing a more robust system to efficiently provide content to a user.
Claims 9 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Crocker (of record) in view of Kavallierou (of record) and Tichenor (previously cited), as applied to claims 1 and 12 above, and further in view of Ando (of record).
Regarding claims 9 and 20, Crocker in view of Kavallierou and Tichenor does not specifically disclose identifying, in the virtual space, a volume around the display;
identifying a path of one or more extended reality objects in the extended reality content;
for each identified path, determining whether the path will intersect with the volume, and
for each path that intersects with the volume, amending the path so that it does not intersect with the volume.
In an analogous art relating to a system for providing an experience to a user, Ando discloses identifying, in the virtual space, a volume around an object (see Ando, at least at col 1, lines 57-62, col 3, lines 5-32, col 9, line 49 – col 11, line 25, and other related text);
identifying a path of one or more extended reality objects in the extended reality content (see Ando, at least at col 1, lines 57-62, col 3, lines 5-32, col 9, line 49 – col 11, line 25, and other related text);
for each identified path, determining whether the path will intersect with the volume (see Ando, at least at col 1, lines 57-62, col 3, lines 5-32, col 9, line 49 – col 11, line 25, and other related text), and
for each path that intersects with the volume, amending the path so that it does not intersect with the volume (see Ando, at least at col 1, lines 57-62, col 3, lines 5-32, col 9, line 49 – col 11, line 25, and other related text).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Crocker in view of Kavallierou and Tichenor to include the limitations as taught by Ando for the advantage of providing a more pleasing and realistic experience for a user.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Crocker (of record) in view of Kavallierou (of record), Tichenor (previously cited) and Yeung (of record), as applied to claim 2 above, and further in view of Leeper (of record).
Regarding claim 10, Crocker in view of Kavallierou, Tichenor and Yeung does not specifically disclose wherein the extended reality device is a first extended reality device, the method further comprising:
receiving, at a second extended reality device, the pre-generated extended reality content for display with the content item;
generating, for output at the second extended reality device, the extended reality content at the corresponding time stamp;
identifying a user interaction with an extended reality object at the first extended reality device; and generating, for output at the first and second extended reality devices and based on the user interaction, the extended reality object.
In an analogous art relating to a system for providing an experience to a user, Leeper discloses an extended reality device is a first extended reality device, the method further comprising:
receiving, at a second extended reality device, extended reality content for display (see Leeper, at least at [0002], [0027]-[0029], [0031]-[0037], [0042]-[0043], [0046], and other related text);
generating, for output at the second extended reality device, the extended reality content at a corresponding time stamp (see Leeper, at least at [0002], [0027]-[0029], [0031]-[0037], [0042]-[0043], and other related text);
identifying a user interaction with an extended reality object at the first extended reality device (see Leeper, at least at [0002], [0027]-[0029], [0031]-[0037], [0042]-[0043], and other related text); and
generating, for output at the first and second extended reality devices and based on the user interaction, the extended reality object (see Leeper, at least at [0002], [0027]-[0029], [0031]-[0037], [0042]-[0043], and other related text).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Crocker in view of Kavallierou, Tichenor and Yeung to include the limitations as taught by Leeper for the advantage of providing a more pleasing and realistic experience for a user.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Crocker (of record) in view of Kavallierou (of record) and Tichenor (previously cited), as applied to claim 1 above, and further in view of Payne (of record).
Regarding claim 11, Crocker in view of Kavallierou and Tichenor does not specifically disclose generating, for output, a content item guidance application, wherein the content item guidance application comprises:
a selectable asset region corresponding to a content item; and an icon, associated with the asset region, wherein the icon indicates that extended reality content is available for the content item.
In an analogous art relating to a system for providing content to a user, Payne discloses generating, for output, a content item guidance application (see Payne, at least at [0195], Fig. 6, and other related text), wherein the content item guidance application comprises:
a selectable asset region corresponding to a content item (see Payne, at least at [0195], Fig. 6, and other related text); and
an icon, associated with the asset region, wherein the icon indicates that extended reality content is available for the content item (i.e., icon 610, see Payne, at least at [0195], Fig. 6, and other related text).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system of Crocker in view of Kavallierou and Tichenor to include the limitations as taught by Payne for the advantage of more robustly providing information to a user.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHENEA DAVIS whose telephone number is (571)272-9524 and whose email address is CHENEA.SMITH@USPTO.GOV. The examiner can normally be reached M-F: 8:00 am - 4:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Flynn can be reached at 571-272-1915. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHENEA DAVIS/Primary Examiner, Art Unit 2421