Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-22 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,058,512. Although the claims at issue are not identical, they are not patentably distinct from each other because the patent claims recite the same limitations with narrower scope (e.g., a "target subject" in an imaging region rather than a "target region"), such that the instant claims encompass the patent claims, as shown in the chart below.
18/764,296, Claim 1:
An information processing apparatus comprising: a processor; and a memory built in or connected to the processor, wherein the processor acquires a plurality of pieces of sound information indicating sounds obtained by a plurality of sound collection devices, a sound collection device position information indicating a position of the plurality of sound collection devices, and a target region position information indicating a position of a target region, specifies a target sound of a region corresponding to the position of the target region based on the acquired sound collection device position information and the acquired target region position information, and generates target region emphasis sound information indicating a sound including a target region emphasis sound in which the specified target sound is emphasized more than a sound emitted from a region different from the region corresponding to the position of the target region indicated by the acquired target region position information in a case in which a virtual viewpoint video is generated, based on viewpoint position information indicating a position of a virtual viewpoint, visual line direction information indicating a virtual visual line direction, angle-of-view information indicating an angle of view, and the target region position information, by using a plurality of images obtained by a plurality of imaging apparatuses in a plurality of directions.

US 12,058,512, Claim 1:
An information processing apparatus comprising: a processor; and a memory built in or connected to the processor, wherein the processor acquires a plurality of pieces of sound information indicating sounds obtained by a plurality of sound collection devices, a sound collection device position information indicating a position of each of the plurality of sound collection devices, and a target subject position information indicating a position of a target subject in an imaging region, specifies a target sound of a region corresponding to the position of the target subject from the plurality of pieces of sound information based on the acquired sound collection device position information and the acquired target subject position information, and generates target subject emphasis sound information indicating a sound including a target subject emphasis sound in which the specified target sound is emphasized more than a sound emitted from a region different from the region corresponding to the position of the target subject indicated by the acquired target subject position information in a case in which a virtual viewpoint video is generated, based on viewpoint position information indicating a position of a virtual viewpoint with respect to the imaging region, visual line direction information indicating a virtual visual line direction with respect to the imaging region, angle-of-view information indicating an angle of view with respect to the imaging region, and the target subject position information, by using a plurality of images obtained by imaging the imaging region by a plurality of imaging apparatuses in a plurality of directions.
Claims 2-22 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 2-17, 20, 1, 1, 18, and 19, respectively, of U.S. Patent No. 12,058,512. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 2-17, 20, 1, 1, 18, and 19 of the ‘512 patent recite substantially the same limitations with narrower scope. For instance, instant dependent claim 19 narrows the scope of instant claim 1 by incorporating features also recited in claim 1 of the ‘512 patent.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 and 12-22 are rejected under 35 U.S.C. 103 as being unpatentable over Funakoshi, US 2018/0035235 A1, in view of Aoyama, US 2020/0358415 A1.
Regarding claim 1, Funakoshi discloses: an information processing apparatus comprising:
a processor (See [0150].); and
a memory built in or connected to the processor ([0150]), wherein the processor acquires a plurality of pieces of sound information indicating sounds obtained by a plurality of sound collection devices (As disclosed in [0028] with respect to figure 2, sound is picked up from a plurality of microphones 101 distributed across an arena.), a sound collection device position information indicating a position of the plurality of sound collection devices (As disclosed in [0029] with respect to figure 2, in accordance with a listening range, listening point, and listening direction output from a listening range decision unit 2, a sound pickup point selection unit 3 selects sound pickup points (collection devices 101 in figure 2) to be used to generate an audio reproduction signal.), and a target region position information indicating a position of a target region (As disclosed in [0029], object position detection unit 9 in figure 2.),
specifies a target sound of a region corresponding to the position of the target region based on the acquired sound collection device position information and the acquired target region position information (See step 102, as disclosed in [0042]: audio at the sound pickup points is acquired, and “the sound pickup signal input unit 1, for example, amplifies the sound pickup signals of the plurality of microphones, and removes noise.” A “listening range” as shown in figure 6A corresponds to a region corresponding to the position of the target region.), and
Funakoshi does not disclose the following limitations, which are, however, taught by Aoyama:
generates target region emphasis sound information indicating a sound including a target region emphasis sound in which the specified target sound is emphasized more than a sound emitted from a region different from the region corresponding to the position of the target region indicated by the acquired target region position information in a case in which a virtual viewpoint video is generated (See [0148], disclosing different sound mixing strategies based on the direction of a target object within the target region (e.g., a region Z1 in figure 1); [0230] in Aoyama discloses generating and outputting a sound suitable for an image displayed as a zoom image.), based on viewpoint position information indicating a position of a virtual viewpoint (See [0022], “viewpoint location.” More specifically, [0070] discloses that a viewing direction within a head-mounted display (HMD) determines an object displayed in the area of image content corresponding to the viewpoint location as an object requiring gain adjustment (sound emphasis).), visual line direction information indicating a virtual visual line direction (More specifically, the HMD 15 includes an acceleration sensor, a gyro sensor, and the like, which respond to a change in the direction or position of the user’s head wearing the HMD.), angle-of-view information indicating an angle of view (See [0231], disclosing that it is also possible to generate and output an appropriate sound corresponding to the angle of an image to be displayed.), and the target region position information (See S72, “Zoom position,” in figure 13.), by using a plurality of images obtained by a plurality of imaging apparatuses in a plurality of directions (See Aoyama [0231], which discloses, with respect to the sound generating process of [0230], “plurality of the cameras 12a may be used to capture images from various angles. In this case, it is possible to generate and reproduce an image corresponding to an angle from which no image has been captured, by using, for example, the images captured by the plurality of cameras 12a.”).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the camera viewpoint-based audio processing of Funakoshi the further feature of selectively applying gain to audio signals within a camera/HMD field of view, based on zoom level ([0230]) and on the relative positions and postures of sound sources with respect to a camera viewpoint and distance ([0108]-[0109]), as disclosed in Aoyama, in order to provide a more realistic and immersive audio experience that synchronizes with the image data of an HMD or virtual camera moving within an arena. See Aoyama [0055]-[0056].
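Purely for illustration of the combined teaching discussed above, and not as a characterization of either reference's actual implementation, the viewpoint-based emphasis concept can be sketched as follows (all names and parameter values are hypothetical):

```python
import math

def emphasize_in_view(sources, view_pos, view_dir, half_angle_deg, gain=2.0):
    """Apply an emphasis gain to sound sources falling within a virtual
    camera's angle of view; sources outside it keep unit gain."""
    vx, vy = view_dir
    n = math.hypot(vx, vy)
    vx, vy = vx / n, vy / n
    gains = {}
    for name, (sx, sy) in sources.items():
        dx, dy = sx - view_pos[0], sy - view_pos[1]
        dist = math.hypot(dx, dy) or 1e-9
        # Angle between the visual line direction and the source bearing.
        cos_a = (dx * vx + dy * vy) / dist
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        gains[name] = gain if angle <= half_angle_deg else 1.0
    return gains

# A player inside a 60-degree angle of view is emphasized; a crowd
# source behind the virtual viewpoint is not.
sources = {"player": (5.0, 5.0), "crowd": (-5.0, 0.0)}
print(emphasize_in_view(sources, view_pos=(0.0, 0.0),
                        view_dir=(1.0, 1.0), half_angle_deg=30.0))
```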
Regarding claim 2, the combination of Funakoshi in view of Aoyama discloses the limitations of claim 1, upon which claim 2 depends. This combination, specifically Funakoshi, further discloses: the information processing apparatus according to claim 1, wherein the processor selectively executes a first generation process of generating the target subject emphasis sound information (Steps 702 and 707 in figure 11.), and a second generation process of generating, based on the acquired sound information, integration sound information indicating an integration sound obtained by integrating a plurality of the sounds obtained by the plurality of sound collection devices (Steps 703-706.).
Regarding claim 3, the combination of Funakoshi in view of Aoyama discloses the limitations of claim 2, upon which claim 3 depends. This combination, specifically Funakoshi, further discloses: the information processing apparatus according to claim 2, wherein the processor executes the first generation process in a case in which the angle of view indicated by the angle-of-view information is less than a reference angle of view (As shown in figure 5, S202, if the depression angle of the viewpoint is less than -10 degrees from the horizontal, a first sound generation process is executed, beginning at S203, using a range calculated based on a projection of the angle of view onto a horizontal plane.), and executes the second generation process in a case in which the angle of view indicated by the angle-of-view information is equal to or more than the reference angle of view (As shown in S205 of figure 5, a second sound generation process begins if the depression angle is at least -10 degrees from the horizontal, based on a range surrounding the object position.).
Regarding claim 4, the combination of Funakoshi in view of Aoyama discloses the limitations of claim 1, upon which claim 4 depends. This combination, specifically Funakoshi, further discloses: the information processing apparatus according to claim 1, wherein indication information for indicating a position of a target subject image showing the target subject in an imaging region image showing the imaging region is received by a reception device in a state in which the imaging region image is displayed by a display device (As disclosed in [0072], position information for an object or objects within an image is stored in RAM as object information.), and
the processor acquires the target subject position information based on correspondence information indicating a correspondence between a position in the imaging region and a position in the imaging region image showing the imaging region, and the indication information received by the reception device ([0073] discloses, in steps S303 to S306, loop processing for each object information item extracted in S302: “Step S305, based on the plurality of sets of camera position coordinates and the plurality of object directions obtained in step S304, the position coordinates of the processing target object are calculated by triangulation.”).
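For illustration only, the triangulation step quoted above (S305) can be sketched as a least-squares ray intersection in two dimensions; the function name and data layout below are hypothetical assumptions, not taken from Funakoshi:

```python
import numpy as np

def triangulate_2d(cam_positions, directions):
    """Estimate a 2D object position from camera positions and unit
    direction vectors toward the object (least-squares ray intersection)."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for c, d in zip(cam_positions, directions):
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        # Projector onto the subspace orthogonal to the ray direction;
        # accumulating these yields the point closest to all rays.
        P = np.eye(2) - np.outer(d, d)
        A += P
        b += P @ np.asarray(c, dtype=float)
    return np.linalg.solve(A, b)

# Two cameras at known positions, each observing the object's bearing.
cams = [(0.0, 0.0), (10.0, 0.0)]
dirs = [(1.0, 1.0), (-1.0, 1.0)]   # both rays intersect at (5, 5)
print(triangulate_2d(cams, dirs))  # ~[5. 5.]
```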
Regarding claim 12, the combination of Funakoshi in view of Aoyama discloses the limitations of claim 1, upon which claim 12 depends. This combination, specifically Funakoshi, further discloses: the information processing apparatus according to claim 1, wherein the target subject emphasis sound information is information indicating a sound including the target subject emphasis sound and not including the sound emitted from the different region (See [0067].).
Regarding claim 13, the combination of Funakoshi in view of Aoyama discloses the limitations of claim 1, upon which claim 13 depends. Funakoshi further discloses: the information processing apparatus according to claim 1, wherein the processor specifies a positional relationship between the position of the target subject and the plurality of sound collection devices by using the acquired sound collection device position information and the acquired target subject position information (See [0075], disclosing detecting the position of an object in an arbitrary viewpoint image.), and
Funakoshi does not disclose:
the sound indicated by each of the plurality of pieces of sound information is a sound adjusted to be smaller as the sound is positioned farther from the position of the target subject depending on the positional relationship specified by the processor.
However, Aoyama discloses, in analogous prior art, an information processing apparatus that adjusts and mixes a direct sound channel (e.g., emanating from a player) and a reverberant sound channel according to the direction of the player with respect to a viewing/listening position in a zoom image. See abstract. In this context, Aoyama further discloses adjusting the volume gain for a player to be greater, and a reverberant sound to be smaller, as the viewpoint becomes relatively closer to the player, as during a zoom operation. (In a zoom image, the reverberant sound is at a relatively greater apparent distance from the player than in a zoomed-out image, in which the player appears roughly as far away as much of the rest of the stadium.) See figure 10 and [0153]-[0155].
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate into Funakoshi the zoom-based gain mixing disclosed in Aoyama, in order to enhance the viewer’s experience of feeling as if he/she viewed the object and heard the sound at the viewpoint location of the image reproduced as the zoom image, providing a more immersive experience. See Aoyama [0056].
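As an illustrative sketch only, the zoom-dependent direct/reverberant mixing described above might be modeled as follows; the linear gain curve and all parameter values are assumptions, not Aoyama's actual figures:

```python
def mix_gains(distance, zoom, near=1.0, far=100.0):
    """Illustrative direct/reverberant gain split: as the (zoomed)
    viewpoint moves closer to the player, the direct sound is emphasized
    and the reverberant stadium sound is attenuated."""
    # Effective viewing distance shrinks as zoom magnification grows.
    eff = max(near, min(far, distance / max(zoom, 1.0)))
    # Normalize to [0, 1]: 0 = at the player, 1 = far away.
    t = (eff - near) / (far - near)
    direct_gain = 1.0 - 0.8 * t   # louder player when close
    reverb_gain = 0.2 + 0.8 * t   # more ambience when far
    return direct_gain, reverb_gain

print(mix_gains(distance=50.0, zoom=1.0))   # mid-distance mix
print(mix_gains(distance=50.0, zoom=10.0))  # zoomed in: player emphasized
```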
Regarding claim 14, the combination of Funakoshi in view of Aoyama discloses the limitations of claim 1, upon which claim 14 depends. This combination, specifically Funakoshi, further discloses: the information processing apparatus according to claim 1, wherein a virtual viewpoint target subject image showing the target subject included in the virtual viewpoint video is an image that is in focus more than images in a periphery of the virtual viewpoint target subject image in the virtual viewpoint video (See [0072], which discloses, with respect to figure 7, that in step S302 an arbitrary viewpoint image is analyzed, in-focus objects are detected and extracted, and features thereof are stored as object information.).
Regarding claim 15, the combination of Funakoshi in view of Aoyama discloses the limitations of claim 1, upon which claim 15 depends. This combination, specifically Funakoshi, further discloses: the information processing apparatus according to claim 1, wherein the sound collection device position information is information indicating the position of the sound collection device fixed in the imaging region (See figure 4D, in connection with [0056], lines 1-8, which disclose listening point information.).
Regarding claim 16, the combination of Funakoshi in view of Aoyama discloses the limitations of claim 1, upon which claim 16 depends. Funakoshi does not disclose: the information processing apparatus according to claim 1, wherein at least one of the plurality of sound collection devices is attached to the target subject.
However, Aoyama discloses this feature in an analogous art. See [0063], where Aoyama discloses employing the invention in a situation where a plurality of terminals 11 in figure 3, containing a sound acquisition unit 52, are attached to performers, players, actors, etc., which are thus considered sound source objects in the image content.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate into Funakoshi wearable sound collection devices attached to persons appearing as objects in an image for sound mixing purposes, as disclosed in Aoyama, in order to provide continuous sound monitoring for a person, such as a musician, performer, actor, player, etc. Such a feature would have had predictable results and a reasonable expectation of success for a person of ordinary skill in the art, based on the disclosure of Aoyama.
Regarding claim 17, the combination of Funakoshi in view of Aoyama discloses the limitations of claim 1, upon which claim 17 depends. Funakoshi does not disclose: the information processing apparatus according to claim 1, wherein the plurality of sound collection devices are attached to a plurality of objects including the target subject in the imaging region.
However, Aoyama discloses this feature in an analogous art. See [0063], where Aoyama discloses employing the invention in a situation where a plurality of terminals 11 in figure 3, containing a sound acquisition unit 52, are attached to performers, players, actors, etc., which are thus considered sound source objects in the image content.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate into Funakoshi wearable sound collection devices attached to persons appearing as objects in an image for sound mixing purposes, as disclosed in Aoyama, in order to provide continuous sound monitoring for a person, such as a musician, performer, actor, player, etc. Such a feature would have had predictable results and a reasonable expectation of success for a person of ordinary skill in the art, based on the disclosure of Aoyama.
Regarding claim 18, the combination of Funakoshi in view of Aoyama discloses the limitations of claim 1, upon which claim 18 depends. This combination, specifically Aoyama, further discloses: the information processing apparatus according to claim 1, wherein the processor is configured to:
generate the target region emphasis sound information or output the generated target region emphasis sound information, in cases in which an observation direction of a person observing an imaging region image is determined in a state where the imaging region image showing the imaging region is displayed on a display device, and
not generate the target region emphasis sound information or not output the generated target region emphasis sound information, in cases in which the observation direction is not determined in a state where the imaging region image is displayed on the display device (See [0070]: That is, the display unit 22 of the HMD 15 displays an area of the image content corresponding to the viewpoint location determined by the position and direction of the HMD 15. Then, an object displayed in the area of the image content corresponding to the viewpoint location is regarded as an object that requires adjusting the gain of a sound so that the sound corresponds to the viewpoint location.).
Regarding claim 19, the combination of Funakoshi in view of Aoyama discloses the limitations of claim 1, upon which claim 19 depends. This combination, specifically Funakoshi, further discloses: the information processing apparatus according to claim 1, wherein the viewpoint position information, the visual line direction information, and the angle-of-view information are set with respect to an imaging region in which the target region is included (See S203 in figure 5, “Calculate range when projecting angle of view on horizontal plane, and set projection range as listening range.”).
Regarding claim 20, the combination of Funakoshi in view of Aoyama discloses the limitations of claim 1, upon which claim 20 depends. This combination, specifically Aoyama, further discloses: the information processing apparatus according to claim 1,
wherein the plurality of images is obtained by imaging an imaging region in which the target region is included, by a plurality of imaging apparatuses in a plurality of directions (See Aoyama, [0231].).
Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Funakoshi in view of Aoyama, and further in view of Farrell, US 2016/0080684 A1.
Regarding claim 5, the combination of Funakoshi in view of Aoyama discloses the limitations of claim 1, upon which claim 5 depends. This combination does not explicitly disclose: the information processing apparatus according to claim 1, wherein an observation direction of a person who observes an imaging region image showing the imaging region is detected by a detector in a state in which the imaging region image is displayed by a display device, and
the processor acquires the target subject position information based on correspondence information indicating a correspondence between a position in the imaging region and a position in the imaging region image showing the imaging region, and a detection result by the detector.
However, Farrell discloses this feature in analogous prior art. Farrell discloses in [0040], item (vii), the use of a personal eye tracker, such as a heads-up or eyeglass-type interface, to make sound selections, in conjunction with items (x) and (xi), which use object recognition to allow sound sources to be associated with annotated video automatically, in which a label corresponding to an object in the video image is associated with a sound source based on the analysis.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Farrell’s eye tracking, in order to refine the precision and accuracy of the sound source selection process in Funakoshi, and Farrell’s object recognition technology, in order to automatically associate objects in video with sound sources.
Regarding claim 6, the combination of Funakoshi in view of Aoyama, in view of Farrell, discloses the limitations of claim 5, upon which claim 6 depends. The combination, specifically Farrell, further discloses: the information processing apparatus according to claim 5, wherein the detector includes an imaging element, and detects a visual line direction of the person as the observation direction based on an eye image obtained by imaging eyes of the person by the imaging element (See [0052], which discloses that eye/gaze direction tracking is used to point to and activate auxels within the video display.).
Regarding claim 7, the combination of Funakoshi in view of Aoyama, in view of Farrell, discloses the limitations of claim 5, upon which claim 7 depends. The combination, specifically Farrell, further discloses: the information processing apparatus according to claim 5, wherein the display device is a head mounted display mounted on the person, and the detector is provided on the head mounted display (See [0040], disclosing use of a personal heads-up, eyeglass-type interface as a pointing device.).
Aoyama discloses using an HMD, but does not disclose a detector. Incorporating the detector disclosed in Farrell into the HMD disclosed in Aoyama would have been obvious for the reasons given above with respect to claim 5.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Funakoshi, in view of Aoyama, in view of Farrell, and further in view of Blume, US 2019/0259873 A1.
Regarding claim 8, the combination of Funakoshi, in view of Aoyama, in view of Farrell, discloses the limitations of claim 7, upon which claim 8 depends. This combination does not disclose: the information processing apparatus according to claim 7, wherein a plurality of the head mounted displays are present, and the processor acquires the target subject position information based on the detection result by the detector provided on a specific head mounted display among the plurality of head mounted displays, and the correspondence information.
However, coordinated use of multiple head-mounted displays with a gaze-tracking function is disclosed in an analogous art by Blume. See abstract, [0014].
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate support for multiple head-mounted displays into the spatial audio system of Funakoshi in view of Aoyama and Farrell, as disclosed in Blume. Use of multiple head-mounted displays, each with its own gaze tracking and auxel detection, would have had predictable results and a reasonable expectation of success.
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Funakoshi, in view of Aoyama, in view of Farrell, and further in view of Frieding, US 11,057,720 B1.
Regarding claim 9, the combination of Funakoshi, in view of Aoyama, in view of Farrell, discloses the limitations of claim 5, upon which claim 9 depends. This combination does not disclose: the information processing apparatus according to claim 5, wherein the processor does not generate the target subject emphasis sound information in a case in which a frequency at which the observation direction changes per unit time is equal to or more than a predetermined frequency.
However, an analogous art by Frieding is directed to a remote microphone arrangement for an auditory prosthesis based on a desired/preferred listening direction, in which a region of interest is determined for sound emphasis based on gaze direction and/or head movement of the user. See abstract and col. 13, lines 51-62. In this context, Frieding discloses in column 16, lines 18-36, an “attack/release time” to make the system ignore sufficiently rapid changes in a user’s direction of focus: “For example, in some embodiments, the direction of focus of the remote device 103 should not change if the recipient quickly glances at a clock or other direction for only a brief period of time and/or if recipient quickly glances in a direction without a sound source. In other words, quick head movements and/or brief re-directions of focus need not trigger a direction/focus update on the remote microphone device 103.”
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “attack/release” feature disclosed in Frieding into the sound processing method and system of Funakoshi in view of Aoyama and Farrell, in order to prevent spurious sound emphasis adjustments from being made, by filtering out user movements corresponding to momentary changes of attention, thereby providing a less disruptive listening experience.
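For illustration only, the quoted “attack/release” behavior can be sketched as a hold-time filter over discretized gaze samples; the sampling scheme and the hold_time parameter below are assumptions, not Frieding's disclosed implementation:

```python
def filtered_focus(samples, hold_time=1.0):
    """A new gaze direction becomes the committed emphasis direction
    only after it has been held for hold_time seconds, so brief
    glances (e.g., at a clock) are ignored."""
    current = None    # committed emphasis direction
    candidate = None  # direction currently being evaluated
    since = 0.0       # timestamp when the candidate first appeared
    out = []
    for t, direction in samples:  # (timestamp, quantized direction)
        if direction != candidate:
            candidate, since = direction, t
        if candidate != current and t - since >= hold_time:
            current = candidate
        out.append((t, current))
    return out

# A quick 0.3 s glance toward 'B' never changes the committed focus.
gaze = [(0.0, "A"), (0.5, "A"), (1.2, "A"), (1.5, "B"),
        (1.8, "A"), (2.5, "A"), (3.0, "A")]
print(filtered_focus(gaze))
```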
Allowable Subject Matter
Claim 11 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: The prior art does not disclose or suggest:
the information processing apparatus according to claim 5, wherein the processor
generates comprehensive sound information indicating a comprehensive sound obtained by integrating a plurality of the sounds obtained by the plurality of sound collection devices, and intermediate sound information indicating an intermediate sound in which the target sound is emphasized more than the comprehensive sound and suppressed more than the target subject emphasis sound, and
outputs the generated comprehensive sound information, the generated intermediate sound information, and the generated target subject emphasis sound information in order of the comprehensive sound information, the intermediate sound information, and the target subject emphasis sound information in a case in which a frequency at which the observation direction changes per unit time is equal to or more than a predetermined frequency.
The closest prior art, Aoyama, discloses mixing a reverberant stadium sound with a sound of one or more players (depending on their proximity to one another; see [0214]), with the level of gain varying depending on the zoom magnification (see figure 10 and [0151]). However, there is no disclosure or suggestion to output the reverberant sound before the player sound in a case in which a frequency at which the observation direction changes per unit time is equal to or more than a predetermined frequency.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE M LOTFI whose telephone number is (571)272-8762. The examiner can normally be reached 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KYLE M LOTFI/ Examiner, Art Unit 2425