Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 01/25/2024, 04/11/2024, 09/17/2024, 10/04/2024, 01/21/2025, and 07/18/2025 are being considered by the examiner.
Drawings
The drawings filed on 01/25/2024 are accepted.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 31, 33, 34, 36, 38-41, 43, 44, 46, and 48-50 of the instant application (hereinafter ‘998) are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 13, 15, 18, 20, and 22 of U.S. Patent No. 11,256,863 (hereinafter ‘863). Although the claims at issue are not identical, they are not patentably distinct from each other because:
With regards to claim 31 of ‘998, claim 13 of ‘863 teaches the limitations of claim 31 of ‘998, since claim 31 of ‘998 is broader than claim 13 of ‘863.
With regards to claim 33 of ‘998, claim 15 of ‘863 teaches the limitations of claim 33 of ‘998, since claim 33 of ‘998 is broader than claim 15 of ‘863.
With regards to claim 34 of ‘998, claim 15 of ‘863 teaches the limitations of claim 34 of ‘998, since claim 34 of ‘998 is broader than claim 15 of ‘863.
With regards to claim 36 of ‘998, claim 18 of ‘863 teaches the limitations of claim 36 of ‘998, since claim 36 of ‘998 is broader than claim 18 of ‘863.
With regards to claim 38 of ‘998, claim 22 of ‘863 teaches the limitations of claim 38 of ‘998, since claim 38 of ‘998 is broader than claim 22 of ‘863.
With regards to claim 39 of ‘998, claim 13 of ‘863 teaches the limitations of claim 39 of ‘998, since claim 39 of ‘998 is broader than claim 13 of ‘863.
With regards to claim 40 of ‘998, claim 20 of ‘863 teaches the limitations of claim 40 of ‘998, since claim 40 of ‘998 is broader than claim 20 of ‘863.
With regards to claim 41 of ‘998, claim 13 of ‘863 teaches the limitations of claim 41 of ‘998, since claim 41 of ‘998 is broader than claim 13 of ‘863.
With regards to claim 43 of ‘998, claim 15 of ‘863 teaches the limitations of claim 43 of ‘998, since claim 43 of ‘998 is broader than claim 15 of ‘863.
With regards to claim 44 of ‘998, claim 15 of ‘863 teaches the limitations of claim 44 of ‘998, since claim 44 of ‘998 is broader than claim 15 of ‘863.
With regards to claim 46 of ‘998, claim 18 of ‘863 teaches the limitations of claim 46 of ‘998, since claim 46 of ‘998 is broader than claim 18 of ‘863.
With regards to claim 48 of ‘998, claim 22 of ‘863 teaches the limitations of claim 48 of ‘998, since claim 48 of ‘998 is broader than claim 22 of ‘863.
With regards to claim 49 of ‘998, claim 13 of ‘863 teaches the limitations of claim 49 of ‘998, since claim 49 of ‘998 is broader than claim 13 of ‘863.
With regards to claim 50 of ‘998, claim 20 of ‘863 teaches the limitations of claim 50 of ‘998, since claim 50 of ‘998 is broader than claim 20 of ‘863.
Claims 32, 37, 42, and 47 of the instant application are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 13 of U.S. Patent No. 11,256,863 (hereinafter ‘863) in view of Schriber et al (US Application: US 20190107927, published Apr. 11, 2019, filed Oct. 9, 2018).
With regards to claim 32 of ‘998, claim 13 of ‘863 teaches the limitations of claim 32 of ‘998 except for ‘… when the document comprises an intervening portion of text between the first portion of text and the second portion of text that is related to a different character’.
Yet Schriber et al teaches ‘… when the document comprises an intervening portion of text between the first portion of text and the second portion of text that is related to a different character’ (Fig. 3H: a bubble for Mary can intervene between the occurrences of two bubbles from Steve from the time line of data parsed from the script).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified claim 13 of ‘863’s ability to recognize text from a script document, such that the recognized text would have further included recognition of an intervening portion for a different character, as taught by Schriber et al. The combination would have implemented a consistent way to generate storyboards for visualizing a story while taking into account the timing of each character’s action(s) (Schriber et al, paragraph 0003).
With regards to claim 37 of ‘998, claim 13 of ‘863 teaches the limitations of claim 37 of ‘998, except: the first temporal mapping and the second temporal mapping includes a value indicative of a rate or a degree of the action.
Yet Schriber et al teaches the first temporal mapping and the second temporal mapping includes a value indicative of a rate or a degree of the action (See Fig. 3E, which shows the attribute of dialog action has a length that extends along a time axis, where the length is a value indicative of duration degree of dialog-action for the character/object).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified claim 13 of ‘863’s ability to recognize text from a script document, such that the recognized text would have further included recognition of a degree of action, as taught by Schriber et al. The combination would have implemented a consistent way to generate storyboards for visualizing a story while taking into account the timing of each character’s action(s) (Schriber et al, paragraph 0003).
With regards to claim 42 of ‘998, it is rejected under similar rationale as claim 32 of ‘998.
With regards to claim 47 of ‘998, it is rejected under similar rationale as claim 37 of ‘998.
Claims 35 and 45 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 13 of U.S. Patent No. 11,256,863 (hereinafter ‘863) in view of Price et al (U.S. Patent No. 9,106,812, issued Aug. 11, 2015, filed Dec. 29, 2011).
With regards to claim 35 of ‘998, claim 15 of ‘863 teaches the limitations of claim 35 of ‘998, except: … the format type … is in full uppercase format.
Yet Price et al teaches the format type … is in full uppercase format (column 2, lines 40-51, Fig. 1: an indentation is identified for the first instance of character name, and as shown the format type includes the character name being associated in full uppercase).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified claim 13 of ‘863’s ability to parse and recognize character/object content present within a script document, such that the recognition for character/object is further modified to include recognition of indented and capitalized text associated with a particular character/object, as taught by Price et al. The combination would have allowed generation of effective storyboards that help to bring life to a screenplay (Price et al, column 1, lines 29-32).
With regards to claim 45 of ‘998, it is rejected under similar rationale as claim 35 of ‘998.
Claim 31 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,604,827 (hereinafter ‘827) in view of Schriber et al (US Application: US 20190107927, published Apr. 11, 2019, filed Oct. 9, 2018).
With regards to claim 31 of ‘998, claim 1 of ‘827 similarly teaches the limitations of claim 31 of ‘998 except: … generating a time based structure comprising an object … the object is associated with a character; determining that a first portion of text in a document is associated with the character; … modifying the time based content structure to add a first temporal mapping to the object …; determining that a second portion of text in a document is associated with the character; … modifying the time based content structure to add a second temporal mapping to the object …
Yet Schriber et al teaches … generating a time based structure comprising an object … the object is associated with a character; determining that a first portion of text in a document is associated with the character; … modifying the time based content structure to add a first temporal mapping to the object …; determining that a second portion of text in a document is associated with the character; … modifying the time based content structure to add a second temporal mapping to the object … (Fig. 3E, paragraphs 0039, 0052: a time based collection of metadata is generated/structured, where each vertical column of data is associated with a specific character/object, and text in a script document is traversed for one or more entities, such as one or more objects and/or actions associated with the character. One or more temporal mappings are updated upon encountering each of the one or more entities, such as actions).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified claim 1 of ‘827’s ability to reference temporal mapping data from a content structure, such that the content structure would have initially undergone a process of generation/update prior to being referenced, as taught by Schriber et al. The combination would have implemented a consistent way to generate storyboards for visualizing a story while taking into account the timing of each character’s action(s) (Schriber et al, paragraph 0003).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 31-50 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
With regards to claim 31, the claim recites ‘generate for output a content segment … and applying the first and second attributes to the object during the first and second time periods’. It is unclear/indefinite whether the object that was subject to application of the first and second attributes is, or is not, part of the generated image, video, and/or audio content, as it appears to be recited independently of the content segment. For purposes of examination, the examiner will assume the object is part of the generated image, video, and/or audio content within the generated content segment. If the object is intended by the applicant to be a part of the generated content segment, the examiner suggests the applicant consider directly coupling the object to the content segment having the image, video, and/or audio content; otherwise, if the examiner’s assumption is not as the applicant intended, the examiner suggests the applicant consider clarifying where the application of the first and second attributes to the object is supposed to occur (such as before or after generation of the content segment).
With regards to claims 32-40, since they depend upon claim 31 and do not resolve the deficiencies of claim 31, they are also rejected under similar rationale as claim 31.
With regards to claim 41, it is rejected under similar rationale as claim 31.
With regards to claims 42-50, since they depend upon claim 41 and do not resolve the deficiencies of claim 41, they are also rejected under similar rationale as claim 41.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 31, 32, 36-42, and 46-50 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Schriber et al (US Application: US 20190107927, published Apr. 11, 2019, filed Oct. 9, 2018).
With regards to claim 31, Schriber et al teaches a method (Fig. 12: a method is implemented using computer hardware/circuitry such as a processor and memory) comprising:
generating a time-based content structure comprising an object in an attribute table, wherein the object is associated with a character (Fig. 3E, paragraphs 0039, 0052: a time based collection of metadata is generated/structured where each vertical column of data is associated with a specific character/object);
determining that a first portion of text in a document is associated with the character (paragraphs 0039 and 0052: text in a script document is traversed for one or more entities, such as objects and or actions associated with the character);
modifying the time-based content structure to add a first temporal mapping to the object that corresponds to the character, wherein the first temporal mapping indicates that a first attribute applies to the object for a duration of a first time period, wherein the first attribute is based on the first portion of text (Fig. 3E, paragraphs 0039 and 0052: an attribute (such as an action or speech/dialog) associated with the character’s (the object’s) data is updated and can be represented as a bubble; see the first bubble visualized earlier in time on the object’s timeline (302A));
determining that a second portion of text in the document that is separate from the first portion of text is associated with the character (Fig. 3E, paragraphs 0039 and 0052: a second portion of dialog/text can be determined when parsing);
modifying the time-based content structure, to add a second temporal mapping to the object that corresponds to the character, wherein the second temporal mapping indicates that a second attribute applies to the object for a duration of a second time period, wherein the second attribute is based on the second portion of text (Fig. 3E, paragraphs 0039 and 0052: this second dialog can be represented as a second bubble (302B) associated with the character’s/object’s timeline); and
generating for output a content segment based on the time-based content structure, wherein the content segment includes one or more of image content, video content, and audio content based on referencing the first and second temporal mappings and applying the first and second attributes to the object during the first and second time periods (paragraphs 0083 and 0085: using the metadata, 2D or 3D output including image and/or audio can be rendered).
With regards to claim 32. (New) The method of claim 31, Schriber et al teaches wherein the second portion of text is separate from the first portion of text when the document comprises an intervening portion of text between the first portion of text and the second portion of text that is related to a different character (Fig. 3H: a bubble for Mary can intervene between the occurrences of two bubbles from Steve from the time line of data parsed from the script).
With regards to claim 36. (New) The method of claim 31, Schriber et al teaches wherein the first temporal mapping is added to the time-based content structure at a first point in the document representing the first time period and the second temporal mapping is added to the time-based content structure at a second point in the document representing the second time period later than the first time period, and wherein one or more of an action, a state, an absolute location, a relative location, an absolute motion, and a relative motion is determined based on the first temporal mapping and the second temporal mapping, as similarly explained in the rejection of claim 31 (Schriber et al was explained to have a first dialog-action, represented as a first bubble, occur earlier in a timeline than a second dialog-action represented as a second/subsequent bubble for the character/object (see Fig. 3E)), and is rejected under similar rationale.
With regards to claim 37. (New) The method of claim 31, Schriber et al teaches wherein the first attribute or the second attribute is an action, and wherein the first temporal mapping and the second temporal mapping includes a value indicative of a rate or a degree of the action (See Fig. 3E, which shows the attribute of dialog action has a length that extends along a time axis, where the length is a value indicative of duration degree of dialog-action for the character/object).
With regards to claim 38. (New) The method of claim 31, Schriber et al teaches further comprising: extracting character data from a portion of text associated with the character; determining respective attribute table entries based on the character data; and matching the respective attribute table entries to the first temporal mapping of the object that corresponds to the character (Fig. 3E: each occurrence of a dialog action in a script is matched to the character/object for which the dialog action is parsed from the script, and is rendered along the character’s/object’s timeline data structure such that one or more subsequent actions are present on a same common timeline for the character/object).
With regards to claim 39. (New) The method of claim 38, Schriber et al teaches wherein the attribute table, based on the extracted character data, includes at least one of: an object data structure, a descriptive structure, an action structure and an audio structure (as explained in the rejection of claim 38, an action structure relates to the action bubbles having their specific duration lengths and rendered in Fig. 3E), and is rejected under similar rationale.
With regards to claim 40. (New) The method of claim 31, Schriber et al teaches wherein the content segment is generated by one or more of combining, replacing, mixing, and matching one or more of the object, an action, a setting, an effect, image content, video content, and audio content from one or more previously stored content structures to create a new content structure, which is then rendered as a new content segment (as similarly explained in the rejection of claim 31, a content segment is generated in 2D and/or 3D form that includes time based actions corresponding to a character/object by referencing the metadata gleaned from a script/document’s text), and is rejected under similar rationale.
With regards to claim 41. Schriber et al teaches a system comprising: a memory; a control circuitry configured to: generate a time-based content structure comprising an object in an attribute table, wherein the object is associated with a character, and wherein the time-based content structure is stored in the memory; determine that a first portion of text in a document is associated with the character; modify the time-based content structure to add a first temporal mapping to the object that corresponds to the character, wherein the first temporal mapping indicates that a first attribute applies to the object for a duration of a first time period, wherein the first attribute is based on the first portion of text; determine that a second portion of text in the document that is separate from the first portion of text is associated with the character; modify the time-based content structure, to add a second temporal mapping to the object that corresponds to the character, wherein the second temporal mapping indicates that a second attribute applies to the object for a duration of a second time period, wherein the second attribute is based on the second portion of text; and an input/output (I/O) circuitry configured to: generate for output a content segment based on the time-based content structure, wherein the content segment includes one or more of image content, video content, and audio content based on referencing the first and second temporal mappings and applying the first and second attributes to the object during the first and second time periods, as similarly explained in the rejection of claim 31, and is rejected under similar rationale.
With regards to claim 42. (New) The system of claim 41, Schriber et al teaches wherein the second portion of text is separate from the first portion of text when the document comprises an intervening portion of text between the first portion of text and the second portion of text that is related to a different character, as similarly explained in the rejection of claim 32, and is rejected under similar rationale.
With regards to claim 46. (New) The system of claim 41, Schriber et al teaches wherein the first temporal mapping is added to the time-based content structure at a first point in the document representing the first time period, and the second temporal mapping is added to the time-based content structure at a second point in the document representing the second time period later than the first time period, and wherein the control circuitry is further configured to determine one or more of an action, a state, an absolute location, a relative location, an absolute motion, and a relative motion based on the first temporal mapping and the second temporal mapping, as similarly explained in the rejection of claim 36, and is rejected under similar rationale.
With regards to claim 47. (New) The system of claim 41, Schriber et al teaches wherein the first attribute or the second attribute is an action, and wherein the first temporal mapping and the second temporal mapping includes a value indicative of a rate or a degree of the action, as similarly explained in the rejection of claim 37, and is rejected under similar rationale.
With regards to claim 48. (New) The system of claim 41, Schriber et al teaches wherein the control circuitry is further configured to: extract character data from a portion of text associated with the character; determine respective attribute table entries based on the character data; and match the respective attribute table entries to the first temporal mapping of the object that corresponds to the character, as similarly explained in the rejection of claim 38, and is rejected under similar rationale.
With regards to claim 49. (New) The system of claim 48, Schriber et al teaches wherein the attribute table, based on the extracted character data, includes at least one of: an object data structure, a descriptive structure, an action structure and an audio structure, as similarly explained in the rejection of claim 39, and is rejected under similar rationale.
With regards to claim 50. (New) The system of claim 41, Schriber et al teaches wherein the (I/O) circuitry is configured to generate the content segment by one or more of combining, replacing, mixing, and matching one or more of the object, an action, a setting, an effect, image content, video content, and audio content from one or more previously stored content structures to create a new content structure, which is then rendered as a new content segment, as similarly explained in the rejection of claim 40, and is rejected under similar rationale.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 33-35 and 43-45 are rejected under 35 U.S.C. 103 as being unpatentable over Schriber et al (US Application: US 20190107927, published Apr. 11, 2019, filed Oct. 9, 2018) in view of Price et al (U.S. Patent No. 9,106,812, issued Aug. 11, 2015, filed Dec. 29, 2011).
With regards to claim 33. (New) The method of claim 31, Schriber et al teaches further comprising: determining that the first and second portions of text … as similarly explained in the rejection of claim 31, and is rejected under similar rationale.
However Schriber et al does not expressly teach … the first and second portions of text are associated with the character based on a format type of the first and second portions of text.
Yet Price et al teaches … the first and second portions of text are associated with the character based on a format type of the first and second portions of text (column 2, lines 40-51, Fig. 1: an indentation is identified for the first instance of character name, and as shown the format type includes the character name being associated in full uppercase).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Schriber et al’s ability to parse and recognize character/object content present within a script document, such that the recognition for character/object is further modified to include recognition of indented and capitalized text associated with a particular character/object, as taught by Price et al. The combination would have allowed Schriber et al to have generated effective storyboards that help bring life to a screenplay (Price et al, column 1, lines 29-32).
With regards to claim 34. (New) The method of claim 33, the combination of Schriber et al and Price et al teaches wherein the format type comprises a uniform indentation of the portion of text (as similarly explained in the rejection of claim 33, formatting includes indentation), and is rejected under similar rationale.
With regards to claim 35. (New) The method of claim 33, the combination of Schriber et al and Price et al teaches wherein the format type comprises each portion of text associated with the character is in full uppercase format (as similarly explained in the rejection of claim 33, format type includes capitalization), and is rejected under similar rationale.
With regards to claim 43. (New) The system of claim 41, the combination of Schriber et al and Price et al teaches wherein the control circuitry is further configured to: determine that the first and second portions of text are associated with the character based on a format type of the first and second portions of text, as similarly explained in the rejection of claim 33, and is rejected under similar rationale.
With regards to claim 44. (New) The system of claim 43, the combination of Schriber et al and Price et al teaches wherein the format type comprises a uniform indentation of the portion of text, as similarly explained in the rejection of claim 34, and is rejected under similar rationale.
With regards to claim 45. (New) The system of claim 43, the combination of Schriber et al and Price et al teaches wherein the format type comprises each portion of text associated with the character is in full uppercase format, as similarly explained in the rejection of claim 35, and is rejected under similar rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
KIM (U.S. Patent No. 8,223,153): This reference teaches using a parser upon a storyboard and subsequently generating a graphic animation based on the parsing.
Lindley et al (US Application: US 2009/0024963): This reference teaches designing a story board that includes attributes for different characters.
Klappert (US Application: US 2011/0135278): This reference teaches parsing NLP annotations that define content and timing contingencies and displaying/providing interactive content.
Kuspa (US Application: US 2013/0124984): This reference teaches parsing a script and creating a time based structure for correlating characters and their associated dialog and/or media.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILSON W TSUI whose telephone number is (571) 272-7596. The examiner can normally be reached Monday - Friday, 9 am - 6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler, can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILSON W TSUI/Primary Examiner, Art Unit 2172