DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims correspondence

Instant application claims    Copending Application 18042259 claims
1, 3                          1
4                             24
5                             4
20                            14
21                            1, 14
Claims 1, 3-5 and 20-21 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4, 14 and 24 of copending Application No. 18042259 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations of the claims of the instant application are broader than the limitations of the claims of copending Application No. 18042259.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Comparison of instant claim 1 (method) with claim 1 (method) of co-pending Application 18042259:

Instant claim 1: A method of subtitle displaying
Copending claim 1: A method for caption rendering in a virtual reality space, comprising:

Instant claim 1: obtaining a subtitle content corresponding to a currently played virtual reality video frame
Copending claim 1: separating a caption content and a picture content on a currently displayed virtual reality (VR) video frame, and mapping and rendering the picture content to a VR panoramic space;

Instant claim 1: determining a target spatial position in a virtual reality panoramic space based on a current line-of-sight direction of a user
Copending claim 1: determining a target spatial position in the VR panoramic space according to a user's current LOS (line-of-sight) direction, wherein the user's current LOS direction is a ray starting from the user;

Instant claim 1: rendering a subtitle layer at the target spatial position based on the subtitle content, and synchronously rendering the subtitle content in the subtitle layer
Copending claim 1: rendering the caption content at the target spatial position to generate a spatial caption,
wherein determining a target spatial position in the VR panoramic space according to a user's current LOS direction comprises:
determining an initial position of a VR device worn by the user in the VR panoramic space as a center point position of the VR panoramic space, wherein the center point position is a constant position not changing with a movement of the VR device in the VR panoramic space;
still taking the initial position as a center point position of the VR panoramic space in response to the VR device being moved;
obtaining a preset radius distance; and
starting from the center point position, taking a position extending to the preset radius distance in the user's current LOS direction as the target spatial position.
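For illustration only, the geometric limitation recited above (a target spatial position found by extending a preset radius distance from a fixed center point along the user's current LOS direction) can be sketched as follows. This is a hypothetical sketch of the claimed computation, not code from either application; the function name, vector representation, and units are assumptions.

```python
import math

def target_spatial_position(center, los_direction, radius):
    """Extend from the fixed center point along the user's current
    line-of-sight (LOS) direction by a preset radius distance.
    Illustrative only; names and conventions are assumptions."""
    norm = math.sqrt(sum(c * c for c in los_direction))
    if norm == 0:
        raise ValueError("LOS direction must be a non-zero vector")
    # Normalize the LOS ray and step `radius` units along it from the center.
    return tuple(c + radius * d / norm for c, d in zip(center, los_direction))

# Example: center at the origin, LOS along +x, preset radius 2.0
print(target_spatial_position((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 2.0))
```

Under this reading, the center point stays constant when the VR device moves, so only the LOS direction changes the target position.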
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 10-16 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 10 recites “the displaying state information of the subtitle layer”. There is a lack of antecedent basis for the term “the displaying state information of the subtitle layer” in the claim.
Claims 11-16 are also rejected by virtue of dependency.
Claim 18 recites “adding a backlight sub-layer and a quantum dot matrix sub-layer under the subtitle layer”. The claim recites inserting two sub-layers under the subtitle layer, but it does not make sense to insert the two sub-layers under the subtitle layer; these sub-layers should be part of the subtitle layer. According to the Specification at [0051], “… the subtitle layer is a backlight sub-layer and a quantum dot matrix sub-layer … As shown in FIG. 3, the subtitle layer is formed by superposing a backlight sub-layer and a quantum dot matrix sub-layer”. The claim does not follow the specification's disclosure.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 3-7, 17 and 20-21 are rejected under 35 U.S.C. 102(a)(1)/102(a)(2) as being anticipated by HWANG et al. (US Pat. Pub. No. 20180332265 “Hwang”).
Regarding claim 1, Hwang teaches a method of subtitle displaying (ABSTRACT: “the present invention suggests a method of providing subtitles for a 360 content”), comprising:
obtaining a subtitle content corresponding to a currently played virtual reality video frame (“[0005]…. it is necessary to extend subtitle related features and subtitle related signaling information to be adapted to use cases of a VR service in order to provide subtitles suitable for 360-degree video”. [0492] According to one embodiment of the present invention, 3D subtitles for 3D video may be provided”);
determining a target spatial position in a virtual reality panoramic space based on a current line-of-sight direction of a user (“[0097]…. An apparatus such as a VR display can extract a viewport region on the basis of the position/direction of a user's head, vertical or horizontal FOV supported by the apparatus, etc. [0291] The num_of_rendering_positions field can signal the number of positions at which one subtitle is simultaneously rendered on the sphere. That is, this field can indicate the number of subtitle regions in which the corresponding subtitle is rendered. Detailed information about the subtitle regions can be provided depending on the number of subtitle regions indicated by this field. [0317] The 360-degree subtitle related metadata according to the present embodiment can specify a subtitle region by indicating the center point and a field of view (FOV) value of the subtitle region. Here, the 360-degree subtitle related metadata can indicate an offset value with respect to the center point of the viewport as the center point of the subtitle region. Accordingly, the 360-degree subtitle related metadata can specify the subtitle region at a relative position with respect to the viewport”); and
rendering a subtitle layer at the target spatial position based on the subtitle content, and synchronously rendering the subtitle content in the subtitle layer (“[0316] According to one embodiment of the present invention, subtitles for 360-degree video may be rendered at a position varying according to viewports. In the present embodiment, the 360-degree subtitle related metadata can specify a subtitle region at a relative position of a viewport on the basis of the viewport. The viewport can refer to a region currently viewed by a user in 360-degree video as described above”).
Claim 20 is directed to an electronic device comprising a processor, a memory (“[0015] The apparatus for providing subtitles for a 360 content, according to other aspect of the present invention, comprising: a processor configured to generate 360 video data captured by at least one camera; a stitcher configured to stitch the 360 video data; a projection processor configured to project the 360 video data to a 2D image; a data encoder configured to encode the 2D image into a video stream; [0662] The module or unit may be one or more processors designed to execute a series of execution steps stored in the memory (or the storage unit)”) and its elements are similar in scope and functions to those of method claim 1, and therefore claim 20 is also rejected with the same rationale as specified in the rejection of claim 1.
Claim 21 is directed to a non-transitory computer readable storage medium (“[0665] In addition, a method according to the present invention can be implemented with processor-readable code in a processor-readable recording medium provided to a network device. The processor-readable medium may include all kinds of recording devices capable of storing data readable by a processor”) and its elements are similar in scope and functions to those of method claim 1, and therefore claim 21 is also rejected with the same rationale as specified in the rejection of claim 1.
Regarding claim 3, Hwang teaches, wherein the determining the target spatial position in the virtual reality panoramic space based on the current line-of-sight direction of the user comprises: determining a central point position of the virtual reality panoramic space, and obtaining a predetermined radius distance; and extending the current line-of-sight direction of the user, from the central point position according to the predetermined radius distance to a position, and taking the position as the target spatial position (“[0317] The 360-degree subtitle related metadata according to the present embodiment can specify a subtitle region by indicating the center point and a field of view (FOV) value of the subtitle region. Here, the 360-degree subtitle related metadata can indicate an offset value with respect to the center point of the viewport as the center point of the subtitle region. Accordingly, the 360-degree subtitle related metadata can specify the subtitle region at a relative position with respect to the viewport”).
Regarding claim 4, Hwang teaches, wherein determining the target spatial position in the virtual reality panoramic space based on the current line-of-sight direction of the user comprises:
obtaining a historical space position corresponding to a subtitle content of a previous frame displayed in the virtual reality panoramic space; (“[0013] Preferably, when the type information indicates a second type, the offset center information indicates yaw, pitch and roll offsets between the center point of the subtitle region and a center point of a previous subtitle region specified by a previous 360 subtitle SEI message.”)
obtaining a line-of-sight change information between the current line-of-sight direction of the user relative to a line-of-sight direction for viewing the previous frame; (“[0011] Preferably, the offset region information includes offset center information indicating yaw, pitch and roll offsets for a center point of the subtitle region, range information indicating horizontal and vertical ranges of the subtitle region from the center point, and type information for indicating a type of the offset center information……..[0013] Preferably, when the type information indicates a second type, the offset center information indicates yaw, pitch and roll offsets between the center point of the subtitle region and a center point of a previous subtitle region specified by a previous 360 subtitle SEI message.”) and
determining the target spatial position based on the line-of-sight change information and the historical spatial position. (“[0011] Preferably, the offset region information includes offset center information indicating yaw, pitch and roll offsets for a center point of the subtitle region, range information indicating horizontal and vertical ranges of the subtitle region from the center point, and type information for indicating a type of the offset center information.”)
Regarding claim 5, Hwang teaches, wherein obtaining line-of-sight change information between the current line-of-sight direction of the user relative to the line-of-sight direction for viewing the previous frame comprises:
obtaining a horizontal axis rotation angle of a camera in a virtual reality device worn by the user relative to the previous frame in a horizontal direction, wherein the horizontal axis rotation angle is change information of the user from a horizontal line-of-sight direction for viewing the previous frame to a horizontal line-of-sight direction for viewing a current frame. (“[0019] Preferably, the offset region information includes offset center information indicating yaw, pitch and roll offsets for a center point of the subtitle region, range information indicating horizontal and vertical ranges of the subtitle region from the center point, and type information for indicating a type of the offset center information.”).
Regarding claim 6, Hwang teaches, wherein rendering the subtitle layer at the target spatial position based on the subtitle content comprises: obtaining a displaying quantity of the subtitle content; and rendering the subtitle layer that matches the displaying quantity (“[0582] The 360-degree subtitle related metadata according to the present embodiment may further include signaling information for changing a window size and a subtitle font size according to change in the depth/disparity of subtitles”).
Regarding claim 7, Hwang teaches, wherein rendering the subtitle layer that matches the displaying quantity comprises: determining a subtitle real-time width and a subtitle real-time height based on the displaying quantity, a predetermined unit subtitle width, and a predetermined unit subtitle height; in response to a width change of the subtitle real-time width, rendering a real-time subtitle layer width that matches the subtitle content based on a layer width of a unit subtitle and the subtitle real-time width; and/or in response to a height change of the subtitle real-time height, rendering a real-time subtitle layer height that matches the subtitle content based on a predetermined layer height of a unit subtitle and the subtitle real-time height (“[0395] A subtitle_region_unit field can indicate a unit that specifies a corresponding subtitle region. The unit specifying a subtitle region may include a percentage, a cell and a pixel according to an embodiment. [0582] The 360-degree subtitle related metadata according to the present embodiment may further include signaling information for changing a window size and a subtitle font size according to change in the depth/disparity of subtitles”).
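For illustration only, the size computation recited in claim 7 (a subtitle real-time width and height derived from a displaying quantity, a predetermined unit subtitle width, and a predetermined unit subtitle height) might be sketched as below. The function name, the reading of “displaying quantity” as a character count, and the line-wrapping policy are assumptions, not taken from Hwang or the instant application.

```python
def subtitle_layer_size(char_count, unit_width, unit_height, chars_per_line):
    """Determine a real-time subtitle layer width/height from the displaying
    quantity (assumed to be a character count) and predetermined unit sizes.
    Hypothetical illustration of the claim 7 limitation."""
    # Ceiling division: number of wrapped lines needed for the content.
    lines = max(1, -(-char_count // chars_per_line))
    # Width grows with the content up to one full line; height grows per line.
    width = min(char_count, chars_per_line) * unit_width
    height = lines * unit_height
    return width, height
```

Under this reading, a change in the displaying quantity changes the real-time width and/or height, and the layer is re-rendered to match.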
Regarding claim 17, Hwang teaches, performing background addition displaying processing on the subtitle layer (“[0444] A background_color field may indicate a background color of corresponding subtitles. The background color of the subtitles can be changed using the value of this field. This field may replace tts:backgroundColor in TTML”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Hwang in view of Dorn et al. (US Pat. Pub. No. 20220174361, “Dorn”).
Regarding claim 2, Hwang is silent about performing a speech recognition processing on an audio stream corresponding to the currently played virtual reality video frame to obtain the subtitle content, or querying a predetermined database to obtain a subtitle content corresponding to the currently played virtual reality video frame.
Dorn teaches querying a predetermined database to obtain a subtitle content corresponding to the currently played virtual reality video frame (“[0072]….The annotation processor dynamically generates the annotated video content stream by querying the annotation database 205 and receiving the annotation layers with the annotations generated by select ones”).
Hwang and Dorn are analogous art as both of them are related to subtitle display.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Hwang by performing a speech recognition processing on an audio stream corresponding to the currently played virtual reality video frame to obtain the subtitle content, or by querying a predetermined database to obtain a subtitle content corresponding to the currently played virtual reality video frame, as taught by Dorn.
The motivation for the above is to reduce subtitle generation time.
Claim(s) 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Hwang in view of Zhang (US Pat. Pub. No. 20190394419 “Zhang”).
Regarding claim 10, Hwang is silent about, in response to a detection that a further layer is displayed in the virtual reality panoramic space, identifying displaying state information of the further layer; and adjusting the displaying state information of the subtitle layer based on the displaying state information of the further layer.
Zhang teaches in response to a detection that a further layer is displayed in video frame, identifying displaying state information of the further layer; and adjusting the displaying state information of the subtitle layer based on the displaying state information of the further layer (“[0104] In an example implementation, the subtitle display region may be determined in the region in the video frame other than the key region according to the subtitle information. For example, when the subtitle displayed according to the display position information in the subtitle information may block the key region, the subtitle display position determined according to the display position information may be adjusted to the region in the video frame other than the key region, such that the subtitle displayed in the adjusted subtitle display region will not block the key content in the video frame”);
Hwang and Zhang are analogous art as both of them are related to subtitle display.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Hwang by, in response to a detection that a further layer is displayed in the virtual reality panoramic space, identifying displaying state information of the further layer and adjusting the displaying state information of the subtitle layer based on the displaying state information of the further layer, similar to Zhang's identifying displaying state information of a further layer displayed in a video frame and adjusting the displaying state information of the subtitle layer based on it, as taught by Zhang.
The motivation for the above is to enhance the viewing experience of the viewer.
Regarding claim 11, Hwang modified by Zhang teaches wherein the displaying state information comprises a space position for display, and adjusting the displaying state information of the subtitle layer based on the displaying state information of the further layer (Zhang “[0092] Subtitle information may merely include subtitle content. A subtitle may be added to the video frame after a display position of the subtitle is determined as required”) comprises:
determining whether a reference spatial position where the further layer is located and the target spatial position satisfy a predetermined occlusion condition (Zhang “[0102] In an example implementation, the subtitle display region may be determined in the region in the video frame other than the key region without considering the subtitle information. The subtitle display region may be determined according to preset parameters (such as a size and a position)”);
in response to the occlusion condition being satisfied, determining a target moving position and/or a target layer displaying size of the subtitle layer, wherein the subtitle layer corresponding to the target moving position and/or the target layer displaying size and the further layer do not satisfy the occlusion condition; and displaying the subtitle layer based on the target moving position and/or the target layer displaying size (Zhang “[0160] In an example implementation, the subtitle display region may also be determined in the region in the video frame other than the key region according to the subtitle content and display position information in the subtitle information. When the subtitle displayed according to the display position information in the subtitle information may block the key region, a subtitle display region of which the size is proportional to the number of words may be directly determined in the region in the video frame other than the key region according to the number of words in the subtitle content. The subtitle display region determined according to the display position information in the subtitle information may also be adjusted to the region in the video frame other than the key region.
[0161] When the subtitle displayed according to the display position information in the subtitle information will block the key region, the subtitle display position determined according to the display position information may not be adjusted, and the final subtitle display region is determined directly according to the display position information”).
Regarding claim 12, Hwang modified by Zhang teaches wherein the method further comprises: before adjusting the displaying state information of the subtitle layer based on the displaying state information of the further layer, determining a layer level of the further layer, and determining that the layer level is higher than a predetermined level threshold (Zhang “[0132] In an example implementation, when a large target object is determined in the video frame (that is, the proportion of the target object in the video frame exceeds a threshold), or multiple target objects are determined in the video frame, the proportion of the key region in the video frame is also large, finally resulting in an undesirable display position of the subtitle. For example, a video frame C is a close-shot of a racing car picture, and a display region where the racing car is located occupies 80% of the area in the video frame C. If the key region is determined merely according to the target object, the subtitle may only be displayed on two lateral sides or at the top of the video frame, and the display position of the subtitle is undesirable”).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Hwang modified by Zhang as applied to claim 11 above, and further in view of Liu et al. (US Pat. Pub. No. 20230359351, “Liu”).
Regarding claim 16, Hwang modified by Zhang is silent about, in response to a detection of an instruction for disabling displaying of the further layer, controlling the subtitle layer to move to the target spatial position for display.
Liu teaches in response to a detection of an instruction for disabling displaying of a further layer, controlling second layer to move to a target spatial position for display (“[0810] When the user does not need to use the control region temporarily, display of the control region is temporarily disabled, so that a display area of another application on the second display can be expanded. If the control region is not needed, interference of the control region with the another application on the second display is reduced, and a user operation interface is simplified”);
Liu and Hwang modified by Zhang are analogous art as both of them are related to image processing.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Hwang as modified by Zhang by including, in response to a detection of an instruction for disabling displaying of the further layer, controlling the subtitle layer to move to the target spatial position for display, similar to including, in response to a detection of an instruction for disabling displaying of a further layer, controlling a second layer to move to a target spatial position for display as taught by Liu.
The motivation for the above is to place the subtitle layer at a position for better visibility.
Claim(s) 18 is rejected under 35 U.S.C. 103 as being unpatentable over Hwang in view of Zhang et al. (US Pat. Pub. No. 20210096362 “Zhang_Y”).
Regarding claim 18, Hwang is silent about adding a backlight sub-layer and a quantum dot matrix sub-layer under the subtitle layer; and/or rendering a lighting animation on the subtitle layer.
Zhang_Y teaches adding a backlight sub-layer and a quantum dot matrix sub-layer under the subtitle layer; and/or rendering a lighting animation on the subtitle layer (“[0094] For example, for active type (self-luminous type) light emitting display panels such as an OLED (organic light emitting diode display panel), a QLED (quantum dot light emitting diode display panel), a Mini LED (submillimeter light emitting diode), a Micro LED (micro light emitting diode) or the like, a region, which contains semantics (such as texts, icons, charts, etc.), of an input image may be selected as the information zone at least, for controlling brightness of pixels in the information zone and non-information zone when the display module performs display function. As understood easily, for active type (self-luminous type) light emitting display panels, a region, which contains semantics (such as texts, icons, charts, etc.), of an input image may also be selected as the information zone, for example, a rectangular region, a circular region or the like including texts, icons, charts and other semantics may be selected as the information zone.
[0095] For example, for a liquid crystal display panel with a backlight, a suitable information zone may be selected according to the type of its backlight. For a liquid crystal display panel with a direct-light type backlight, a region, which contains semantics, of an input image (e.g., texts, icons, charts, etc.), such as a rectangular region”).
Hwang and Zhang_Y are analogous art as both of them are related to subtitle display.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Hwang by adding a backlight sub-layer and a quantum dot matrix sub-layer under the subtitle layer, and/or rendering a lighting animation on the subtitle layer, as taught by Zhang_Y.
The motivation for the above is to provide better focus on the subtitle.
Allowable Subject Matter
Claims 8-9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim 8 is objected to because the combination of the best available prior art fails to expressly teach the limitations of claim 8.
Claim 9 is objected to by virtue of its dependency.
Claims 13-16 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tapas Mazumder whose telephone number is (571)270-7466. The examiner can normally be reached M-F 8:00 AM-5:00 PM PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TAPAS MAZUMDER/Primary Examiner, Art Unit 2615