DETAILED ACTION
Introduction
This office action is in response to Applicant’s submission filed on 09/29/2025. Claims 1-20 are pending in the application and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Examiner Recommendation
The Examiner suggests that Applicant identify the relevant support in the Specification for the amendments to the claims.
Response to Amendment
The amendments filed on 09/29/2025 have been entered and considered in this Office Action. Claims 1-20 have been examined.
Response to Arguments
Applicant's arguments filed 09/29/2025 have been fully considered as follows:
Applicant’s arguments with respect to claim 1 (also representative of claims 10 and 17) state that
“Applicant respectfully asserts that Weisman does not teach or suggest identifying, by the device executing one or more computer-implemented artificial intelligence models, a portion of the transcript file that corresponds to the topic, as recited by the amended independent claims.”
Applicant’s arguments above with respect to claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant’s arguments with respect to claim 1 (also representative of claims 10 and 17) state that
“Further, the Weisman and Mahajan fail to teach or suggest causing display, by the device, of the output with the deep-linking IO and the portion of the transcript file on a display screen of a user device, wherein the portion of the transcript file comprises segmented sections of text including one or more hyperlinked entities, as recited by the amended independent claims… Further, Weisman and Mahajan also fail to teach or suggest in response to a second interaction with the deep-linking IO at the display screen comprising selection of a first segment of the segmented sections, rendering, by the device, an audio portion of the audio file related to the first segment and one or more images associated with a speaker in the audio portion, wherein the audio portion for… the portion corresponds to the topic is discussed, as generally recited by the amended independent claims”
The Examiner respectfully disagrees. Mahajan teaches “Factoid section 816 identifies factoids that are mentioned in A/V content 104. The factoids are each associated with a designator 818, and the designator appears on the topic section 812 at the point where the factoid is mentioned in the A/V content” (Mahajan, col. 11, lines 22-35; see also Mahajan Fig. 2A, Fig. 6, and Fig. 8), which links the factoids to the entities mentioned in the transcript and the related timelines. Weisman in view of Mahajan teaches the relevant portions as indicated in this Office Action. Further, to advance prosecution of this amendment, Applicant is requested to identify support for the amendment reciting “in response to a second interaction with the deep-linking IO at the display screen comprising selection of a first segment of the segmented sections, rendering, by the device, an audio portion of the audio file related to the first segment and one or more images associated with a speaker in the audio portion, wherein the audio portion for the speaker begins from the first instance where the portion corresponds to the topic is discussed;”.
Regarding the rejections under 35 U.S.C. 103 of the remaining dependent claims, to the extent those claims are traversed for reasons similar to those presented for independent claims 1, 10, and 17 in the Remarks filed 09/29/2025, Examiner respectfully directs Applicant to the responses provided above with respect to claims 1, 10, and 17. For at least the same reasons provided above, Examiner respectfully disagrees; Applicant's arguments have been fully considered but are not persuasive.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 10, and 17, and dependent claims 2-9, 11-16, and 18-20, are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The limitation added at line 16 of independent claims 1, 10, and 17 reciting “in response to a second interaction with the deep-linking IO at the display screen comprising selection of a first segment of the segmented sections, rendering, by the device, an audio portion of the audio file related to the first segment and one or more images associated with a speaker in the audio portion, wherein the audio portion for the speaker begins from the first instance where the portion corresponds to the topic is discussed;” is not supported by the specification. Upon review of the specification, the Examiner was unable to locate support for the newly amended limitations and requests that, in the next response, Applicant indicate the portions of the specification that support them. The amendment to claim 1 is inferred as being based on specification paragraphs [0123]-[0131], which describe the output of the identified portion of the identified transcript, but those paragraphs describe neither the one or more images associated with the speaker in the audio portion nor the audio portion for the speaker beginning from the first instance where the topic is discussed. Dependent claims 2-9, 11-16, and 18-20 are likewise rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement due to their dependence from independent claims 1, 10, and 17, respectively.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-7, 9-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Weisman et al. (US PgPub. 2023/0105830) in view of Mahajan et al. (US Patent 7,640,272), further in view of Joller et al. (US PgPub. 2022/0139398).
Regarding claim 1, Weisman teaches a method, comprising: in response to a first interaction with a set of search results related to a topic, selecting, by a device, an audio file associated with the first interaction with the set of search results (see Weisman, [0039]: search engine platform 130 may allow a user to perform an Internet search, which can include searching media items 121; search engine platform 130 may include a website (e.g., a webpage) or application back-end software that may be used to provide a user with access to the Internet, including media items 121; see also Weisman, Fig. 8); identifying, by the device, a transcript file that corresponds to the audio file, the transcript file comprising functionality that enables rendering of the audio file (see Weisman, [0078], which describes identifying the transcript file corresponding to the podcast episode (audio file) and processing the metadata for the rendering of the audio file); generating, by the device, a deep-linking interface object (IO) that links the portion of the transcript file, the topic, and the rendering of the audio file (see Weisman, [0044], which describes the podcast episode's URL and/or an identifier that links to the podcast episode's URL (deep-linking IO); in some implementations, the podcast episode fetcher 144 can further extract certain metadata from the identified podcast episode and store the extracted metadata as attributes in data store 110; Weisman, [0059]: the attribute extraction module 203 can extract attributes from podcast episodes and/or video content items; attributes can include, and are not limited to, a transcript of the audio, the audio content, a title, a description, a duration, and/or a publication date; some of the attributes, such as the title and description, can be stored in the podcast episode and/or video content item metadata); generating, by the device, an output with the deep-linking IO and of the identified transcript based on the identification of the portion, the output comprising a configuration of the transcript file (see Weisman, [0059]: the podcast episodes 211 can be a table that stores podcast episode identifiers along with associated podcast episode attributes, and video content items 212 can be a table that stores video content item identifiers along with associated video content item attributes, where the attributes include the transcript and other related items of the audio as listed in the same paragraph); causing display, by the device, of the output with the deep-linking IO and the portion of the transcript file on a display screen of a user device, wherein the portion of the transcript file comprises segmented sections of text including one or more hyperlinked entities (see Weisman, Fig. 8, and Weisman, [0095]: at block 1240, processing logic can provide, to a user device, information associated with the matching podcast episode identifier and with the video content item identifier; in some implementations, the information associated with the matching podcast episode identifier and with the matching video content item identifier can be provided in response to receiving a search query from a user device; see also Weisman, [0072], which displays additional links to podcasts); and causing, by the device, display of the one or more images and the audio portion associated with the second interaction generated output on a display screen of a user device (see Weisman, [0069]: FIG. 5 illustrates an example GUI 500 of a content sharing platform optimized for audio content, such as a video content item matched to a podcast episode, in accordance with an implementation of the disclosure; in one example, a user can select a podcast episode from the “Good for Listening” section 401 illustrated in FIG. 4, and the podcast episode can be presented to the user as illustrated in FIG. 5).
However, Weisman fails to teach when the transcript file is displayed; in response to a second interaction with the deep-linking IO at the display screen comprising selection of a first segment of the segmented sections, rendering, by the device, an audio portion of the audio file related to the first segment and one or more images associated with a speaker in the audio portion, wherein the audio portion for the speaker begins from the first instance where the portion corresponds to the topic is discussed. However, Mahajan teaches in response to a first interaction with a set of search results related to a topic, selecting, by the device, an audio file from the set of search results (see Mahajan, col. 4, lines 11-26, which shows that the search results 132 can present the A/V content to the user interface along with the metadata); identifying, by the device, a transcript file that corresponds to the audio file, the transcript file comprising functionality that enables rendering of the audio file when the transcript file is displayed (see Mahajan, Fig. 2A; Mahajan, col. 5, lines 11-17, which shows closed caption box 210 displaying the transcript); identifying, by the device, a portion of the transcript file that corresponds to the topic (see Mahajan, col. 11, lines 11-15: user interface 800 also includes a subject matter section 812; subject matter section 812 has a corresponding topic index 814 that can be used to identify the topic, or subject matter, being discussed during the corresponding portion of the A/V content); generating, by the device, an output with the deep-linking IO and the transcript file (see Mahajan, col. 2, lines 36-39: in any case, presenting the A/V content along with the metadata driven displays and community metadata driven displays to the user is indicated by block 162 in FIG. 2; Mahajan, col. 4, lines 11-18, describes the search results containing hyperlinks (deep-linking IO) to the content identified as relevant to the searches); causing display, by the device, of the output with the deep-linking IO and the portion of the transcript file on a display screen of a user device, wherein the portion of the transcript file comprises segmented sections of text including one or more hyperlinked entities (Mahajan, Fig. 2A, illustrates the user interface showing the transcript and relevant markers; Mahajan, Fig. 8, displays the identified audio clips and related factoids/contexts (transcripts); see Mahajan Fig. 2A, Fig. 6, and Fig. 8; col. 11, lines 22-35, discusses the links to the entities mentioned in the transcript and the related timelines); and in response to a second interaction with the deep-linking IO at the display screen comprising selection of a first segment of the segmented sections, rendering, by the device, an audio portion of the audio file related to the first segment and one or more images associated with a speaker in the audio portion, wherein the audio portion for the speaker begins from the first instance where the portion corresponds to the topic is discussed (see Mahajan, Fig. 2A, which shows the transcripts and segments in 210 and 208 and the speakers in section 220; based on the speaker selected, the related clip (image) is displayed in 200, and topics/chapters are indicated in section 232 and in Mahajan, Fig. 4).
Weisman and Mahajan are considered to be analogous to the claimed invention because they relate to methods for searching for media content. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Weisman on the processing of the metadata of audio/podcast files with the teachings of Mahajan on processing the metadata of audio/video content, which is used in generating user interface interaction components that allow a user to view subject matter in various segments of the A/V content and to interact with the A/V content based on the automatically generated metadata, to improve the user's ability to discover, navigate, and consume very large amounts of available A/V content (see Mahajan, col. 1, lines 25-35).
However, Weisman in view of Mahajan fails to teach identifying, by the device executing one or more computer-implemented artificial intelligence models, a portion of the transcript file that corresponds to the topic. However, Joller teaches identifying, by the device executing one or more computer-implemented artificial intelligence models, a portion of the transcript file that corresponds to the topic (see Joller, [0036]-[0037], which discuss text segmentation and identification of topics in segments of text; Joller, [0041]-[0044], which discuss identification of topics in transcribed texts using neural networks (artificial intelligence models)).
Weisman, Mahajan, and Joller are considered to be analogous to the claimed invention because they relate to methods for searching for media content. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Weisman and Mahajan on the processing of the metadata of audio files used in generating user interface interaction components with the machine learning/natural language processing teachings of Joller to facilitate processing, analyzing, and/or structuring of longer-form audio content (see Joller, [0006]).
Regarding claim 2, Weisman in view of Mahajan, further in view of Joller, teach the method of claim 1. Mahajan further teaches analyzing the audio file by performing natural language processing (NLP), and identifying a set of terms mentioned via the audio file (see Mahajan, col. 7, line 61 - col. 8, line 3: natural language processor 322 performs natural language analysis on speech recognition results 400); determining a subset of terms from the set of terms corresponding to the topic (see Mahajan, col. 3, lines 57-60: identifying the keywords is indicated by block 158 in FIG. 2; in one illustrative embodiment, the keywords are identified by tagging them as keywords using any desired mechanism); identifying supplemental content for each term in the subset of terms (see Mahajan, col. 3, line 45 - col. 4, line 1, and Fig. 2, block 160: once the keywords are identified, they are provided to search engine 112; search engine 112 can be any desired information retrieval search engine configured to search, through network 116, various sources of content 118 and 120); and annotating each term in the subset of terms based on identified supplemental content (see Mahajan, col. 4, lines 12-18: in another embodiment, search results 132 comprise hyperlinks to the content identified as relevant to the search; as is described later, the links can be selected by the user to navigate to the relevant content; retrieving related web information (search results 132) for each chapter is indicated by block 160 in FIG. 2; the search results 132 can be stored in metadata store 114 as well).
Regarding claim 3, Weisman in view of Mahajan, further in view of Joller, teach the method of claim 2. Mahajan further teaches wherein the transcript file comprises the annotated subset of terms (see Mahajan, col. 3, lines 55-64: the keywords are identified and stored as metadata (first subset of terms)).
Regarding claim 4, Weisman in view of Mahajan, further in view of Joller, teach the method of claim 2. Mahajan further teaches wherein the annotation enables a display of the identified supplemental content (see Mahajan, Fig. 2A, 206).
Regarding claim 5, Weisman in view of Mahajan, further in view of Joller, teach the method of claim 2. Mahajan further teaches wherein the search for supplemental content is respective at least one of a local library of audio file and remote network locations (see Mahajan, col. 3, line 66 - col. 4, line 6: search engine 112 can be any desired information retrieval search engine configured to search, through network 116, various sources of content 118 and 120; in the illustrative embodiment, search engine 112 simply searches the content based on the identified keywords and returns search results 132 based on the search; in another embodiment, search engine 112 performs the search while (or just before) the A/V content 104 is played by a user).
Regarding claim 6, Weisman in view of Mahajan, further in view of Joller, teach the method of claim 1. Mahajan further teaches identifying a set of speakers from the audio file, wherein identification of the set of speakers is based on detected audio characteristics for each speaker (see Mahajan, col. 7, lines 32-42, which describes performing speaker segmentation based on acoustic characteristics); identifying a set of terms within the audio file related to each speaker in the set of speakers (see Mahajan, col. 7, lines 36-38: audio analyzer 320 thus clusters segments of speech into a plurality of clusters wherein each cluster belongs to a single speaker); determining portions of the audio that correspond to a set of terms for each speaker (see Mahajan, col. 7, line 43 - col. 8, line 3, which discusses identifying names and topics (set of terms) mentioned and annotating accordingly); and segmenting the transcript file based on the determined portions, wherein the transcript file is a segmented version of the transcript file (see Mahajan, col. 7, line 61 - col. 8, line 3: natural language processor 322 performs natural language analysis on speech recognition results 400; this is indicated by block 460 in FIG. 5, and can be done for a variety of different reasons; for instance, natural language processor 322 can be used in segmenting A/V content 104 into topic segments; this is shown by block 404 in FIG. 4, and generating the topic segmentation is indicated by block 462 in FIG. 5; this can be merged with the chapter analysis function described above, or provided as an input to that function).
Regarding claim 7, Weisman in view of Mahajan, further in view of Joller, teach the method of claim 6. Mahajan further teaches detecting names mentioned within the audio file (see Mahajan, col. 7, lines 48-52: otherwise, the names of the speakers are often mentioned during speech; analyzer 320 can simply use the names mentioned (such as during introductions or at other points in the A/V content stream) to identify which speakers correspond to which clusters); determining that at least one detected name corresponds to an identified speaker (see Mahajan, col. 7, lines 46-52: for instance, where closed captioning is provided, the speaker names can be easily associated with clusters); and annotating the transcript file to indicate the at least one detected name when displayed, wherein the transcript file is an annotated version of the transcript file (see Mahajan, Fig. 2A, 208, 220).
Regarding claim 9, Weisman in view of Mahajan, further in view of Joller, teach the method of claim 1. Mahajan further teaches causing, by the device, communication over a network to a third party platform, the communication comprising information related to the topic of the transcript file (see Mahajan, Fig. 6: interface 124 has Web results 604); receiving, by the device, a digital content item provided by the third party platform, the digital content item comprising content corresponding to the topic (see Mahajan, Fig. 6: 604 Web results and 628 sponsored sites; user interface 124 also has a “recommended” section 606, which illustratively includes other recommended A/V content that is related to the A/V content being displayed on screen 600); and causing, by the device, display of the digital content item in association with the transcript file (see Mahajan, Fig. 6: summary section 610 illustratively includes a short textual summary of the A/V content being displayed).
Regarding claim 10, it is directed to a non-transitory computer-readable storage medium claim corresponding to the method claim presented in claim 1 and is rejected under the same grounds stated above regarding claim 1.
Regarding claim 11, it is directed to a non-transitory computer-readable storage medium claim corresponding to the method claim presented in claim 2 and is rejected under the same grounds stated above regarding claim 2.
Regarding claim 12, it is directed to a non-transitory computer-readable storage medium claim corresponding to the method claim presented in claim 3 and is rejected under the same grounds stated above regarding claim 3.
Regarding claim 13, it is directed to a non-transitory computer-readable storage medium claim corresponding to the method claim presented in claim 4 and is rejected under the same grounds stated above regarding claim 4.
Regarding claim 14, it is directed to a non-transitory computer-readable storage medium claim corresponding to the method claim presented in claim 6 and is rejected under the same grounds stated above regarding claim 6.
Regarding claim 15, it is directed to a non-transitory computer-readable storage medium claim corresponding to the method claim presented in claim 7 and is rejected under the same grounds stated above regarding claim 7.
Regarding claim 17, it is directed to a device claim corresponding to the method claim presented in claim 1 and is rejected under the same grounds stated above regarding claim 1.
Regarding claim 18, it is directed to a device claim corresponding to the method claim presented in claim 2 and is rejected under the same grounds stated above regarding claim 2.
Regarding claim 19, it is directed to a device claim corresponding to the method claim presented in claim 6 and is rejected under the same grounds stated above regarding claim 6.
Regarding claim 20, it is directed to a device claim corresponding to the method claim presented in claim 7 and is rejected under the same grounds stated above regarding claim 7.
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Weisman et al. (US PgPub. 2023/0105830) in view of Mahajan et al. (US Patent 7,640,272), further in view of Joller et al. (US PgPub. 2022/0139398), further in view of Bleak (US PgPub. 2023/0141096).
Regarding claim 8, Weisman in view of Mahajan, further in view of Joller, teach the method of claim 1. Mahajan teaches determining that the transcript file is not current based on a threshold (see Mahajan, col. 11, lines 53-59: the user preference information is used to process the identified A/V content; this is interpreted as any form of customization, including identifying that the transcript is not current based on a threshold); and generating another transcript file for the audio file, wherein the identified transcript is the other transcript (see Mahajan, col. 12, lines 7-27: the A/V contents are stitched together and then the personalized program is presented along with the corresponding metadata (including the transcript, as discussed earlier); interpreted as generating another transcript).
However, to further compact prosecution, Bleak is used to further teach determining that the transcript file is not current based on a threshold (see Bleak, [0080]: at block 406, the transcript data may be presented on a display of the device; at block 408, a revision (threshold) to the transcript data may be obtained at the device from the remote transcription system); and generating another transcript file for the audio file, wherein the identified transcript is the other transcript (see Bleak, [0082]: at block 412, in response to the indication of the change to the presentation, the revision may be presented by the device; in some embodiments, the revision may be obtained by the device after the indication of the change to the presentation is obtained; alternately or additionally, the revision may be obtained by the device before the indication of the change to the presentation is determined; interpreted as another transcript being generated).
Weisman, Mahajan, Joller, and Bleak are considered to be analogous to the claimed invention because they relate to methods for searching for media content. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Weisman in view of Mahajan on the metadata of audio/video content, which is used in generating user interface interaction components that allow a user to view subject matter in various segments of the A/V content and to interact with the A/V content based on the automatically generated metadata, with the transcription-customization teachings of Bleak to provide assistance that enables people with special needs to participate in audio communications (see Bleak, [0002]).
Regarding claim 16, it is directed to a non-transitory computer-readable storage medium claim corresponding to the method claim presented in claim 8 and is rejected under the same grounds stated above regarding claim 8.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ivers et al. (US PgPub. 2022/0208155) teaches a visualization-of-audio-content system (1300): based on the textual elements, e.g., (1331-1333), algorithms directed to generating and suggesting visual content as described above will offer matching visual assets (1340); in some embodiments, users, administrators, and automated processes/devices may select certain visual assets (1340) for pairing with the audio segment (1306, 1308) (see Ivers, Fig. 13A-C).
Pappu et al. (US PgPub. 2017/0062010) teaches a workflow example 600 for serving relevant digital content associated with advertisements (e.g., advertisement content) based, at least in part, on the transcription of digital content (see Pappu, Fig. 6).
S. Colbath et al., "Spoken documents: creating searchable archives from continuous audio," Proceedings of the 33rd Annual Hawaii International Conference on System Sciences, Maui, HI, USA, 2000, teaches the Rough'n'Ready system for data discovery (Colbath, pp. 5-7).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANDINI SUBRAMANI whose telephone number is (571)272-3916. The examiner can normally be reached Monday - Friday, 12:00 pm - 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh M Mehta can be reached at (571)272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NANDINI SUBRAMANI/Examiner, Art Unit 2656
/BHAVESH M MEHTA/Supervisory Patent Examiner, Art Unit 2656