Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This non-final office action is in response to the Application filed on 11/17/2023.
Claims 1-20 are pending for examination. Claims 1 and 10 are independent claims.
Drawings
New corrected drawings in compliance with 37 CFR 1.121(d) are required in this application because Figs. 7-39 appear to be screenshots or photocopies. While this is permitted, some of the text is not legible and does not meet the requirements of 37 CFR 1.84, Standards for drawings, specifically 37 CFR 1.84(p)(3), which requires that numbers, letters, and reference characters measure at least .32 cm. (1/8 inch) in height.
See 37 CFR 1.84 Standards for drawings.
…
(l) Character of lines, numbers, and letters. All drawings must be made by a process which will give them satisfactory reproduction characteristics. Every line, number, and letter must be durable, clean, black (except for color drawings), sufficiently dense and dark, and uniformly thick and well-defined. The weight of all lines and letters must be heavy enough to permit adequate reproduction. This requirement applies to all lines however fine, to shading, and to lines representing cut surfaces in sectional views. Lines and strokes of different thicknesses may be used in the same drawing where different thicknesses have a different meaning.
…
(p) Numbers, letters, and reference characters.
…
(3) Numbers, letters, and reference characters must measure at least .32 cm. (1/8 inch) in height. They should not be placed in the drawing so as to interfere with its comprehension. Therefore, they should not cross or mingle with the lines. They should not be placed upon hatched or shaded surfaces. When necessary, such as indicating a surface or cross section, a reference character may be underlined and a blank space may be left in the hatching or shading where the character occurs so that it appears distinct.
Applicant is advised to employ the services of a competent patent draftsperson outside the Office, as the U.S. Patent and Trademark Office no longer prepares new drawings. The corrected drawings are required in reply to the Office action to avoid abandonment of the application. The requirement for corrected drawings will not be held in abeyance.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 8-10, 13, 14, 16, 18-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Abe; Makoto US Pub. No. 20140244155 (Abe).
Claim 1:
Abe teaches:
A computer program product for generating enhanced media in a digital book, the computer program product comprising a non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code being configured, when executed by a processor [¶ 0102-107] (memory and control unit), to:
receive a downloaded text file of a publication [¶ 0128] (acquired from a server by using a communication unit, could be “downloaded”);
generate a first user interface, wherein text of the downloaded text file is viewable by a user [¶ 0114, 161, 180-182, 273] (Figs. 3, 10, 11, 20, 25, black characters of the sentences (sentences 341, sentences 351, and sentences 361) displayed in FIGS. 11A to 11C show the read portion and the gray characters thereof show the unread portions) [¶ 0114-120] (Fig. 3, display with text and display with map scene) [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text) [¶ 0273] (Fig. 25, scene location after the change is not at the location on the map being displayed when changing the scene location, the display device 10 divides the display screen, and displays both of the map including the scene location before the change and the map including the scene location after the change);
receive a media file related to the text of the downloaded text file [¶ 0123-129] (Fig. 4, scenario data 121, location information link, data 122, and map data 123);
tag a point in the text [¶ 0129-142] (Figs. 5-7, location information link data that links the sentence location to the scene location);
receive an executable instruction embedded in the tagged point in the text, wherein the executable instruction performs a dynamic change in the media file [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text);
display the downloaded text into a digital readable format, wherein the downloaded text incorporates the executable instruction in the tagged point along with the media file in the digital book [¶ 0129-142] (Figs. 5-7, location information link data that links the sentence location to the scene location); and
perform the executable instruction upon a triggering event associated with the tagged point in the text, wherein performing the executable instruction performs a display of the dynamic change in the media file in the digital book format [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text) [¶ 0273] (Fig. 25, scene location after the change is not at the location on the map being displayed when changing the scene location, the display device 10 divides the display screen, and displays both of the map including the scene location before the change and the map including the scene location after the change).
Abe teaches all the elements of the claim; however, some of these elements may be in different embodiments and use different terminology. For example, retrieving from a server in Abe would be an obvious variation of the claimed "download". It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the different embodiments of Abe and to recognize that the difference in terminology is an obvious variant.
The reason, rationale, and motivation for this combination would have been for “convenience in accessing the contents” [Abe: ¶ 0197, 233-234].
Claim 2:
Abe teaches:
The computer program product of claim 1, wherein the media file is a map [¶ 0114, 161, 180-182, 273] (Figs. 3, 10, 11, 20, 25, show a map).
Claim 3:
Abe teaches:
The computer program product of claim 2, wherein the dynamic change in the map includes displaying a change in an appearance of map features in the map [¶ 0114, 161, 180-182, 273] (Figs. 3, 10, 11, 20, 25, black characters of the sentences (sentences 341, sentences 351, and sentences 361) displayed in FIGS. 11A to 11C show the read portion and the gray characters thereof show the unread portions) [¶ 0114-120] (Fig. 3, display with text and display with map scene) [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text) [¶ 0273] (Fig. 25, scene location after the change is not at the location on the map being displayed when changing the scene location, the display device 10 divides the display screen, and displays both of the map including the scene location before the change and the map including the scene location after the change).
Claim 8:
Abe teaches:
The computer program product of claim 1, wherein the computer readable program code is further configured to:
narrate the text (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text);
identify the tagged point in the text during narration; and
display the dynamic change in the media file synchronized with narration of the identified tagged point in the text [¶ 0114, 161, 180-182, 273] (Figs. 3, 10, 11, 20, 25, black characters of the sentences (sentences 341, sentences 351, and sentences 361) displayed in FIGS. 11A to 11C show the read portion and the gray characters thereof show the unread portions) [¶ 0114-120] (Fig. 3, display with text and display with map scene) [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text) [¶ 0273] (Fig. 25, scene location after the change is not at the location on the map being displayed when changing the scene location, the display device 10 divides the display screen, and displays both of the map including the scene location before the change and the map including the scene location after the change).
Claim 9:
Abe teaches:
The computer program product of claim 1, wherein the computer readable program code is further configured to receive a user generated map including map features correlated with a text passage in the text, wherein the executable instruction is associated with one or more of the map features in the user generated map [¶ 0114-120] (Fig. 3, display with text and display with map scene) [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text).
Claim 10:
Claim 10 appears to be reworded but similar to claim 1 and is rejected using the same art and the same rationale.
Claim 10 does not recite a “tag” but does recite a “frame”.
Claim 1 recites a “download”, and claim 10 recites a “download portal”.
Abe also teaches a "frame" [¶ 0114-120] (Fig. 3, display with text and display with map scene, the map is in a "frame") [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text, the map is in a "frame").
Abe teaches:
A user interface platform for digital books, comprising:
a first user interface, wherein the first user interface includes:
a digital book download portal [¶ 0128] (acquired from a server by using a communication unit, could be “download portal”), and
a media development interface, wherein an author user generates a media file or modifies a pre-generated media file, that includes content associated with a digital book downloaded through the digital book download portal [¶ 0114-120] (Fig. 3, display with text and display with map scene) [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text); and
a second user interface, wherein the second user interface includes a first frame displaying text of the digital book downloaded through the digital book download portal and displays the author user generated media file or modified pre-generated media files [¶ 0114, 161, 180-182, 273] (Figs. 3, 10, 11, 20, 25, black characters of the sentences (sentences 341, sentences 351, and sentences 361) displayed in FIGS. 11A to 11C show the read portion and the gray characters thereof show the unread portions) [¶ 0114-120] (Fig. 3, display with text and display with map scene) [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text) [¶ 0273] (Fig. 25, scene location after the change is not at the location on the map being displayed when changing the scene location, the display device 10 divides the display screen, and displays both of the map including the scene location before the change and the map including the scene location after the change).
Claim 13:
Claim 13 appears to be reworded but similar to claim 1 and is rejected using the same art and the same rationale.
Abe teaches:
The user interface platform of claim 10, further comprising a first function in the first user interface, wherein the first function is configured to receive an executable instruction embedded in the text of the digital book [¶ 0123-129] (Fig. 4, scenario data 121, location information link, data 122, and map data 123) [¶ 0129-142] (Figs. 5-7, location information link data that links the sentence location to the scene location) [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text).
Claim 14:
Claim 14 appears to be reworded but similar to claim 3 and is rejected using the same art and the same rationale.
Abe teaches:
The user interface platform of claim 13, wherein the executable instruction embedded in the text of the digital book is configured to perform a dynamic change in the author user generated media file or modified pre-generated media file [¶ 0114, 161, 180-182, 273] (Figs. 3, 10, 11, 20, 25, black characters of the sentences (sentences 341, sentences 351, and sentences 361) displayed in FIGS. 11A to 11C show the read portion and the gray characters thereof show the unread portions) [¶ 0114-120] (Fig. 3, display with text and display with map scene) [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text) [¶ 0273] (Fig. 25, scene location after the change is not at the location on the map being displayed when changing the scene location, the display device 10 divides the display screen, and displays both of the map including the scene location before the change and the map including the scene location after the change).
Claim 16:
Claim 16 appears to be reworded but similar to claim 8 and is rejected using the same art and the same rationale.
Abe teaches:
The user interface platform of claim 13, wherein:
the second user interface includes a narration function configured to audibly narrate the text of the digital book (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text);
the second user interface includes an automated feature that, upon a computer processor detecting a position of the executable instruction embedded in the text of the digital book, the computer processor triggers the executable instruction upon determining that a narration has reached the detected position of the executable instruction embedded in the text of the digital book [¶ 0114, 161, 180-182, 273] (Figs. 3, 10, 11, 20, 25, black characters of the sentences (sentences 341, sentences 351, and sentences 361) displayed in FIGS. 11A to 11C show the read portion and the gray characters thereof show the unread portions) [¶ 0114-120] (Fig. 3, display with text and display with map scene) [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text) [¶ 0273] (Fig. 25, scene location after the change is not at the location on the map being displayed when changing the scene location, the display device 10 divides the display screen, and displays both of the map including the scene location before the change and the map including the scene location after the change).
Claim 18:
Abe teaches:
The user interface platform of claim 10, wherein:
the second user interface includes a second frame demarcated from the first frame;
the first frame is dedicated to displaying the text of the digital book; and
the second frame displays the author user generated media file or modified pre-generated media file [¶ 0114, 161, 180-182, 273] (Figs. 3, 10, 11, 20, 25, black characters of the sentences (sentences 341, sentences 351, and sentences 361) displayed in FIGS. 11A to 11C show the read portion and the gray characters thereof show the unread portions) [¶ 0114-120] (Fig. 3, display with text and display with map scene) [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text) [¶ 0273] (Fig. 25, scene location after the change is not at the location on the map being displayed when changing the scene location, the display device 10 divides the display screen, and displays both of the map including the scene location before the change and the map including the scene location after the change).
Claim 19:
Abe teaches:
The user interface platform of claim 18, wherein the author user generated media file or modified pre-generated media file changes appearance in the second frame in relation to a reader user progressing through the text of the digital book in the first frame [¶ 0114, 161, 180-182, 273] (Figs. 3, 10, 11, 20, 25, black characters of the sentences (sentences 341, sentences 351, and sentences 361) displayed in FIGS. 11A to 11C show the read portion and the gray characters thereof show the unread portions) [¶ 0114-120] (Fig. 3, display with text and display with map scene) [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text) [¶ 0273] (Fig. 25, scene location after the change is not at the location on the map being displayed when changing the scene location, the display device 10 divides the display screen, and displays both of the map including the scene location before the change and the map including the scene location after the change).
Claim 20:
Abe teaches:
The user interface platform of claim 18, wherein the author user generated media file or modified pre-generated media file remains visible in the second frame as a reader user progresses through multiple pages of the text of the digital book in the first frame [¶ 0137-139, 145-149] (page data for scenario) [¶ 0129-142] (Figs. 5-7, location information link data that links the sentence location to the scene location; if a sentence spans, or crosses over to, the next page and there is not a scene change, then the frame would be visible through "multiple pages").
Claim(s) 4-5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Abe; Makoto US Pub. No. 20140244155 (Abe) in view of Boss; Gregory J. et al. US Pub. No. 20170124733 (Boss).
Claim 4:
Abe teaches all the elements as shown above.
Abe does not appear to explicitly disclose “animation of map features”.
However, the disclosure of Boss teaches:
The computer program product of claim 2, wherein the dynamic change in the map includes animation of map features in the map [¶ 0025-28] (map animations).
Boss also teaches: [¶ 0004, 32] (creating a map for a story plot, receiving an indication of a user position in the story plot, determining a set of coordinates on the map for a character in the story plot with respect to the user position, and displaying the map with the character in the story plot represented on the map according to the set of coordinates)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of maps in electronic books in Abe and the method of map animations in Boss, with a reasonable expectation of success.
The motivation for doing so would have been the use of a known technique to improve similar devices (methods, or products) in the same way (see KSR Int'l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (U.S. 2007) and MPEP § 2143(D)).
The known technique of map animations in Boss could be applied to the electronic maps in Abe. Boss and Abe are similar devices because each describes electronic books. One of ordinary skill in the art would have recognized that applying the known technique would improve the similar devices and result in an improved system, with a reasonable expectation of success, for accuracy and consistency in e-books [Boss: ¶ 0003].
Claim 5:
Abe teaches:
The computer program product of claim 4, wherein the animation correlates to a text passage describing the animation in the text [¶ 0114-120] (Fig. 3, display with text and display with map scene) [¶ 0177-183] (Figs. 11A-11C, scene location icon progresses as the reading voice reads the text) [¶ 0273] (Fig. 25, scene location after the change is not at the location on the map being displayed when changing the scene location, the display device 10 divides the display screen, and displays both of the map including the scene location before the change and the map including the scene location after the change).
Claim(s) 6, 7, 11, 12, 15, 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Abe; Makoto US Pub. No. 20140244155 (Abe) in view of Cranfill; Elizabeth Caroline Furches et al. US Pub. No. 2012/0311438 (Cranfill).
Claim 6:
Abe teaches all the elements as shown above.
Abe does not appear to explicitly disclose “visual representation is selectable by a reader”.
However, the disclosure of Cranfill teaches:
The computer program product of claim 1, wherein the computer readable program code is further configured to:
generate a visual representation of the tagged point in the text [¶ 0198] (reading position (i.e., the words as they were being spoken) could be visually highlighted to enhance the user's experience in following along and reading the words as they were being spoken, a highlight is a “visual representation”), wherein:
the visual representation is selectable by a reader [¶ 0253] (enable a user to highlight a portion of text or inspire invocation of a map); and
the triggering event is a selection of the visual representation by the reader [¶ 0080-84] (display also includes a link to different portions of the map (e.g., links to different continents within the world map)).
Cranfill also teaches: [¶ 0196, 267] (text or other content of a publication could have and display selectable links that provide access to webpages, inline videos or essentially any other type of complementary content, stored either locally on the device or available via the network, enable eBooks to have links to web pages, inline videos, images, music or other audio clips).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of maps in electronic books in Abe and the method of maps in e-books in Cranfill, with a reasonable expectation of success.
The motivation for doing so would have been the use of a known technique to improve similar devices (methods, or products) in the same way (see KSR Int'l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (U.S. 2007) and MPEP § 2143(D)).
The known technique of e-book maps in Cranfill could be applied to the electronic maps in Abe. Cranfill and Abe are similar devices because each describes electronic books.
One of ordinary skill in the art would have recognized that applying the known technique would improve the similar devices and result in an improved system, with a reasonable expectation of success, for "improving the user experience" [Cranfill: ¶ 0059].
Claim 7:
Abe teaches:
The computer program product of claim 6, wherein the visual representation of the tagged point in the text is one of a stylized font that differs from adjacent font, superscripted or subscripted font, or an icon positioned adjacent the tagged point in the text [¶ 0114, 161, 180-182, 273] (Figs. 3, 10, 11, 20, 25, black characters of the sentences (sentences 341, sentences 351, and sentences 361) displayed in FIGS. 11A to 11C show the read portion and the gray characters thereof show the unread portions, black and grey sentences are “stylized font”).
Claim 11:
Abe teaches: [¶ 0128] (acquired from a server by using a communication unit, could be "downloaded").
Cranfill teaches:
The user interface platform of claim 10, wherein the author user generated media file or modified pre-generated media file is a map of content described in the text of the digital book downloaded through the digital book download portal [¶ 0062, 65-66] (e-book store for downloading content) [¶ 0080-84] (display also includes a link to different portions of the map (e.g., links to different continents within the world map)).
Claim 12:
Cranfill teaches:
The user interface platform of claim 10, wherein the author user generated media file or modified pre-generated media file is a video of content described in the text of the digital book downloaded through the digital book download portal [¶ 0104] (visual output may include graphics, text, icons, video, and any combination thereof, collectively termed "graphics") [¶ 0121, 150] ("graphics" includes any object that can be displayed to a user, including without limitation text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations and the like) [¶ 0196, 267] (text or other content of a publication could have and display selectable links that provide access to webpages, inline videos or essentially any other type of complementary content, stored either locally on the device or available via the network, enable eBooks to have links to web pages, inline videos, images, music or other audio clips).
Claim 15:
Claim 15 appears to be reworded but similar to claim 10 and is rejected using the same art and the same rationale.
Cranfill teaches:
The user interface platform of claim 14, wherein:
the first user interface includes a second function configured to generate a visual representation of the executable instruction embedded in the text of the digital book [¶ 0198] (reading position (i.e., the words as they were being spoken) could be visually highlighted to enhance the user's experience in following along and reading the words as they were being spoken, a highlight is a “visual representation”), and
the executable instruction is triggered in response to a reader user interacting with the visual representation of the executable instruction embedded in the text of the digital book [¶ 0253] (enable a user to highlight a portion of text or inspire invocation of a map) [¶ 0080-84] (display also includes a link to different portions of the map (e.g., links to different continents within the world map)).
Cranfill also teaches: [¶ 0196, 267] (text or other content of a publication could have and display selectable links that provide access to webpages, inline videos or essentially any other type of complementary content, stored either locally on the device or available via the network, enable eBooks to have links to web pages, inline videos, images, music or other audio clips).
Claim 17:
Cranfill teaches:
The user interface platform of claim 13, wherein:
the executable instruction embedded in the text of the digital book is configured to open a web page displayed in the second user interface; and
the web page shows content related to the text of the digital book [¶ 0196, 267] (text or other content of a publication could have and display selectable links that provide access to webpages, inline videos or essentially any other type of complementary content, stored either locally on the device or available via the network, enable eBooks to have links to web pages, inline videos, images, music or other audio clips).
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Please See PTO-892: Notice of References Cited.
Evidence of the level of skill of an ordinary person in the art for Claim 1:
Howell; Eric Dennis et al. US 20230038412 teaches: Figs. 3-4, picture with dialog; [0173] Fig. 14, Once the call-out content 1403 is activated with the geo-enable 1404 action, the call-out content is available for modification based on the geographic location of the user when viewing the digital story. geo-location map 1419.
TOMSON; KYLE US 20150004586 teaches: FIG. 15 is a depiction of a form of level 3 content presentation with inline dynamic map, side by side view.
Havard; Amanda Meredith US 20130104072 teaches: maps for e-book, zoom Figs. 99-101 show an animation; progressive interactive elements such as character profiles, maps, or timelines which have content dependent upon where in the book the content is accessed.
NOWAKOWSKI; MACIEJ SZYMON et al. US 20130268826 teaches: Maps, audio playback subsystem 230 also provides still images (or video, if available) corresponding to the portion of the book being presented in audio format.
Citations to Prior Art
A reference to specific paragraphs, columns, pages, or figures in a cited prior art reference is not limited to preferred embodiments or any specific examples. It is well settled that a prior art reference, in its entirety, must be considered for all that it expressly teaches and fairly suggests to one having ordinary skill in the art. Stated differently, a prior art disclosure reading on a limitation of Applicant's claim cannot be ignored on the ground that other embodiments disclosed were instead cited. Therefore, the Examiner's citation to a specific portion of a single prior art reference is not intended to exclusively dictate, but rather, to demonstrate an exemplary disclosure commensurate with the specific limitations being addressed. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)); Upsher-Smith Labs. v. Pamlab, LLC, 412 F.3d 1319, 1323, 75 USPQ2d 1213, 1215 (Fed. Cir. 2005); In re Fritch, 972 F.2d 1260, 1264, 23 USPQ2d 1780, 1782 (Fed. Cir. 1992); Merck & Co. v. Biocraft Labs., Inc., 874 F.2d 804, 807, 10 USPQ2d 1843, 1846 (Fed. Cir. 1989); In re Fracalossi, 681 F.2d 792, 794 n.1, 215 USPQ 569, 570 n.1 (CCPA 1982); In re Lamberti, 545 F.2d 747, 750, 192 USPQ 278, 280 (CCPA 1976); In re Bozek, 416 F.2d 1385, 1390, 163 USPQ 545, 549 (CCPA 1969).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN J SMITH whose telephone number is (571)270-3825. The examiner can normally be reached Monday - Friday 11:00 - 7:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ADAM QUELER can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Benjamin Smith/Primary Examiner, Art Unit 2172 Direct Phone: 571-270-3825
Direct Fax: 571-270-4825
Email: benjamin.smith@uspto.gov