DETAILED ACTION
This action is in response to the claims filed 01/30/2026 for Application number 16/862,976. Claims 1, 2, 4, 5, 7, 8, 10, 11, 13-17, 19, 20, 22, 23, 25, 28, and 29 have been amended, and claim 31 is new. Thus, claims 1-31 are currently pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/30/2026 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-31 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1,
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 1 is directed to one or more processors, i.e., a manufacture, which is one of the statutory categories.
Step 2A Prong One Analysis: Each of the following limitation(s):
identify one or more objects previously rendered in two or more frames from a sequence of frames
generate one or more indications based, at least in part, on the identified one or more objects, and whether to reuse the one or more objects in one or more subsequent frames
as drafted, under its broadest reasonable interpretation, covers a mental process corresponding to an evaluation performed in the human mind.
Step 2A Prong Two Analysis: This judicial exception is not integrated into a practical application. In particular, the claim only recites additional elements that are mere instructions to implement an abstract idea on a computer, or that merely use a computer or other machinery as a tool to perform the abstract idea. See MPEP 2106.05(f). The recitation of the additional elements “One or more processors comprising circuitry to: use one or more neural networks to:”, “are to be used at least partially by the one or more devices to determine…”, and “cause the one or more devices to render the one or more objects in the one or more subsequent frames”, as drafted, recites generic computer components at a high level of generality (i.e., as a generic computer component performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component or other machinery.
The claim further recites: “wherein the one or more indications are to be stored in a storage of one or more devices” and “to be displayed as content based, at least in part, on the one or more indications retrieved from the storage”. These limitations are insignificant extra-solution activities. Please see MPEP 2106.05(g).
Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using generic computer components to perform the abstract idea amount to no more than mere instructions to apply the exception using a generic computer component, and such mere instructions cannot provide an inventive concept. Furthermore, the limitations of “wherein the one or more indications are to be stored in a storage of one or more devices” and “to be displayed as content based, at least in part, on the one or more indications retrieved from the storage” are well-understood, routine, and conventional, as evidenced by MPEP §2106.05(d)(II)(iv), “Storing and retrieving information in memory”.
These limitations therefore remain insignificant extra-solution activity even upon reconsideration and do not amount to significantly more. Even when considered in combination, the additional elements amount to mere instructions to apply the exception using generic computer components along with insignificant extra-solution activity, which cannot provide an inventive concept. The claim is not patent eligible.
Regarding claim 2,
Claim 2, dependent upon Claim 1, recites “text, indicative of the one or more objects”, which describes a mental process corresponding to an evaluation, a judgment, or a combination thereof, and
only recites the new additional element “wherein the circuitry is to further cause…”, which, as drafted, recites generic computer components at a high level of generality (i.e., as a generic computer component performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component.
The claim further recites “to be stored in cache”, which is considered an insignificant extra-solution activity.
Additionally, the limitation “to be stored in cache” is well-understood, routine, and conventional, as evidenced by MPEP §2106.05(d)(II)(iv), “Storing and retrieving information in memory”.
The claim is not patent eligible.
Regarding claim 3,
Claim 3, dependent upon Claim 1, recites “identify whether the one or more objects were previously rendered in the two or more frames by comparing text… to one or more indications stored in the cache for one or more previously-rendered objects”, which describes a mental process corresponding to an evaluation, a judgment, or a combination thereof, and recites the additional element “wherein the circuitry is further to…”, which, as drafted, recites generic computer components at a high level of generality (i.e., as a generic computer component performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component.
The claim further recites “stored in cache”, which is considered an insignificant extra-solution activity.
Additionally, the limitation “stored in cache” is well-understood, routine, and conventional, as evidenced by MPEP §2106.05(d)(II)(iv), “Storing and retrieving information in memory”.
The claim is not patent eligible.
Regarding claim 4,
Claim 4, dependent upon Claim 1, recites “encode text”, which describes a mental process corresponding to an evaluation, a judgment, or a combination thereof, and recites the additional elements “wherein the circuitry is further to encode text, indicative of the one or more objects if previously rendered” and “wherein a recipient of the transmission comprising the one or more devices having the one or more objects cached locally can utilize locally cached objects to generate at least one frame of two or more frames from the sequence of frames”, which, as drafted, recite generic computer components at a high level of generality (i.e., as a generic computer component performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Further, the insignificant extra-solution activity of “in a transmission of two or more frames instead of including the one or more objects” is considered well-understood, routine, and conventional because of what is recited in MPEP 2106.05(d)(II): “The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity... i. Receiving or transmitting data over a network, e.g., using the Internet to gather data”.
Regarding claim 5,
Claim 5, dependent upon Claim 1, does not recite any additional abstract ideas, and only recites the new additional element “wherein the one or more neural networks include one or more convolutional neural networks (CNNs) to analyze the two or more frames and one or more long short-term memory (LSTM) recurrent neural networks (RNNs) to encode and output the one or more indications comprising text indicative of the one or more objects”, which, as drafted, recites generic computer components at a high level of generality (i.e., as a generic computer component performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component.
Regarding claim 6,
Claim 6, dependent upon Claim 1, recites “wherein the identified one or more objects include video objects, image objects, or audio objects”, which merely generally links the judicial exception to a field of use. Please see MPEP 2106.05(h).
The claim does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.
Regarding Claims 7-12, they recite features similar to claims 1-6 and are rejected for at least the same reasons set forth therein.
Regarding Claims 13-18, they recite features similar to claims 1-6 and 7-12 and are rejected for at least the same reasons set forth therein.
Regarding Claims 19-24, they recite features similar to claims 1-6, 7-12, and 13-18 and are rejected for at least the same reasons set forth therein.
Regarding Claims 25-30, they recite features similar to claims 1-6, 7-12, 13-18, and 19-24 and are rejected for at least the same reasons set forth therein.
Claim 31, dependent upon Claim 1, recites the insignificant extra-solution activity of “to transmit, to the one or more devices, the one or more subsequent frames comprising the one or more objects based, at least in part, on the one or more indications”, which is considered well-understood, routine, and conventional because of what is recited in MPEP 2106.05(d)(II): “The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity... i. Receiving or transmitting data over a network, e.g., using the Internet to gather data”.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 5-7, 11-13, 17-19, 23-25, and 29-31 are rejected under 35 U.S.C. 103 as being unpatentable over Karianakis et al. ("US 20210073563 A1", hereinafter "Karianakis") in view of Golas et al. ("US 20200143550 A1", hereinafter "Golas").
Regarding claim 1, Karianakis teaches One or more processors (¶0063), comprising: circuitry to (¶0063-¶0064): use one or more neural networks to (¶0023, “deep Convolutional Neural Network (CNN)”):
identify one or more objects previously rendered in two or more frames from a sequence of frames (“An object re-identification method is disclosed. For each of a plurality of frames of a video, a quality of the frame of the video is assessed and a confidence that a previously-recognized object is present in the frame of the video is determined.” [¶0004]); and
generate one or more indications based, at least in part, on the identified one or more objects (“a confidence that a previously-recognized object is present in the frame of the video is determined” [¶0004, confidence corresponds to an indication]), and
cause the one or more devices to render the one or more objects in the one or more subsequent frames, to be displayed as content based, at least in part, on the one or more indications retrieved from the storage. (“When included, display subsystem 1206 may be used to present a visual representation of data held by storage subsystem 1204.” [¶0069])
However, Karianakis fails to explicitly teach wherein the one or more indications are to be stored in a storage location of one or more devices and are to be used at least partially by the one or more devices to determine whether to reuse the one or more objects in one or more subsequent frames.
Golas teaches wherein the one or more indications are to be stored in a storage location of one or more devices and are to be used at least partially by the one or more devices (“In one embodiment, pixel data of frame “n” 215 of objects from frame n 220 is saved for possible reuse of pixel data in the next frame “n+1.” Additionally, vertex coordinate data is saved for use in determining a frame-to-frame motion vector of pixels. In one embodiment, the pixel data and vertex coordinates from frame n are stored in a buffer memory for use in the next frame n+1.” [¶0049]) to determine whether to reuse the one or more objects in one or more subsequent frames (“For example, some embodiments of the present disclosure may track objects across frames using application assistance along with driver resource tracking to reuse intermediate rendered results from one or more previous frames to assist the rendering of the current frame” [¶0105])
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karianakis’s object re-identification method with the adaptive rendering of image frames as taught by Golas. One would have been motivated to make this modification in order to reuse some of the pixel data from previous frames to reduce the number of shading samples by utilizing hardware and software analysis. [¶0105, Golas]
Regarding claim 5, Karianakis/Golas teaches The one or more processors of claim 1, where Karianakis further teaches wherein the one or more neural networks include one or more convolutional neural networks (CNNs) to analyze the two or more frames (“At each time step, the object re-identification model 100 receives data derived from input video 102 via one or more cameras, calculates a feature vector using a frame-level model (f.sub.CNN) 106 that is based on a deep Convolutional Neural Network (CNN)” [¶0023]) and one or more long short-term memory (LSTM) recurrent neural networks (RNNs) to encode and output the one or more indications comprising text indicative of the one or more objects. (“On top of the CNN features, a recurrent model 110 includes a Long Short-Term Memory (LSTM) unit that models short-range temporal dynamics.” [¶0023])
Regarding claim 6, Karianakis/Golas teaches The one or more processors of claim 1, where Karianakis teaches wherein the identified one or more objects include video objects, image objects, or audio objects. (“The object re-identification model 100 may be trained to re-identify any suitable number of different previously-recognized objects (e.g., different people)” [¶0041; corresponds to image objects])
Regarding claims 7, 11, and 12, they are substantially similar to claims 1, 5, and 6, respectively, and are rejected in the same manner, with the same art and reasoning applying.
Regarding claims 13, 17, and 18, they are substantially similar to claims 1, 5, and 6, respectively, and are rejected in the same manner, with the same art and reasoning applying.
Regarding claims 19, 23, and 24, they are substantially similar to claims 1, 5, and 6, respectively, and are rejected in the same manner, with the same art and reasoning applying.
Claim 25 recites features similar to claim 1 and is rejected for at least the same reasons therein.
Regarding claim 29, it is substantially similar to claim 5 and is rejected in the same manner, with the same art and reasoning applying.
Regarding claim 30, it is substantially similar to claim 6 and is rejected in the same manner, with the same art and reasoning applying.
Regarding claim 31, Karianakis/Golas teaches The one or more processors of claim 1, where Karianakis further teaches wherein the circuitry is to cause one or more remote devices to transmit, to the one or more devices, the one or more subsequent frames comprising the one or more objects based, at least in part, on the one or more indications. (“At each time step, the object re-identification model 100 receives data derived from input video 102 via one or more cameras, calculates a feature vector using a frame-level model (f.sub.CNN) 106 that is based on a deep Convolutional Neural Network (CNN), for example.” [¶0023; note: it is inherent that the object re-identification model is being implemented on a device])
Claims 2-4, 8-10, 14-16, 20-22, and 26-28 are rejected under 35 U.S.C. 103 as being unpatentable over Karianakis in view of Golas and further in view of Farooq et al. ("A Convolutional Baseline for Person Re-Identification Using Vision and Language Descriptions", hereinafter "Farooq").
Regarding claim 2, Karianakis/Golas teaches The one or more processors of claim 1; however, it fails to explicitly teach wherein the circuitry to further cause the one or more indications comprising text, indicative of the one or more objects to be stored in cache.
Farooq teaches wherein the circuitry to further cause the one or more indications comprising text, indicative of the one or more objects to be stored in cache. (“The task of person search shares the idea of using natural language descriptions for retrieving a person/pedestrian image.” [pg. 3, B. Person Search, ¶1; note: “to be stored in cache” is merely an intended result and carries little to no patentable weight])
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karianakis’/Golas’ teachings by encoding natural language descriptions to match with visual observations to perform reidentification as taught by Farooq. One would have been motivated to make this modification in order to use language description of an object to assist with identification of an object. [pg. 1, Introduction, ¶2, Farooq]
Regarding Claim 3, Karianakis/Golas teaches The one or more processors of claim 1; however, it fails to explicitly teach wherein the circuitry is further to identify whether the one or more objects were previously rendered in the two or more frames by comparing text, indicative of the one or more objects, to one or more indications stored in cache for one or more previously rendered objects.
Farooq teaches wherein the circuitry is further to identify whether the one or more objects were previously rendered in the two or more frames by comparing text, indicative of the one or more objects, to one or more indications stored in cache for one or more previously rendered objects. (“Figure 6 presents qualitative results on the crossRe-ID dataset. The results are shown for both separate and joint modelling. In the first two examples, there is no correct match for the given query for V × V case with separate modelling. However, it can be seen that with the help of language, correct matches are observed in top five retrieved images. Specifically, if we look at the second example, top match for the vision only (separate training) is influenced by the background of the image.” [pg. 9, G. Qualitative Results, ¶1])
Same motivation to combine the teachings of Karianakis/Golas/Farooq as claim 2.
Regarding Claim 4, Karianakis/Golas teaches The one or more processors of claim 1, where Karianakis further teaches in a transmission of two or more frames instead of including the one or more objects, wherein a recipient of the transmission having the one or more objects cached locally can utilize locally cached objects to generate at least one frame of two or more frames from the sequence of frames (“When included, display subsystem 1206 may be used to present a visual representation of data held by storage subsystem 1204. This visual representation may take the form of a graphical user interface (GUI). Display subsystem 1206 may include one or more display devices utilizing virtually any type of technology. In some implementations, display subsystem 1206 may include one or more virtual-, augmented-, or mixed reality displays.” [¶0069; discloses transmitting visual representations to VR displays/devices thus would teach a transmission of two or more frames])
However Karianakis/Golas fails to explicitly teach wherein the circuitry is further to encode text, indicative of the one or more objects if previously rendered,
Farooq teaches wherein the circuitry is further to encode text, indicative of the one or more objects if previously rendered, (“The task of person search shares the idea of using natural language descriptions for retrieving a person/pedestrian image.” [pg. 3, B. Person Search, ¶1])
Same motivation to combine the teachings of Karianakis/Golas/Farooq as claim 2.
Regarding claims 8, 9, and 10, they are substantially similar to claims 2, 3, and 4, respectively, and are rejected in the same manner, with the same art and reasoning applying.
Regarding claims 14, 15, and 16, they are substantially similar to claims 2, 3, and 4, respectively, and are rejected in the same manner, with the same art and reasoning applying.
Regarding claims 20, 21, and 22, they are substantially similar to claims 2, 3, and 4, respectively, and are rejected in the same manner, with the same art and reasoning applying.
Claim 26 recites features similar to claim 2 and is rejected for at least the same reasons set forth therein. Claim 26 additionally requires used to instruct a client device (Karianakis, ¶0060) to insert the one or more objects retrieved from a local storage location in at least one frame of the two or more frames from the sequence of frames (Golas, “For example, some embodiments of the present disclosure may track objects across frames using application assistance along with driver resource tracking to reuse intermediate rendered results from one or more previous frames to assist the rendering of the current frame.” [¶0105])
Same motivation to combine the teachings of Karianakis/Golas/Farooq as claim 2.
Regarding claims 27 and 28, they are substantially similar to claims 3 and 4, respectively, and are rejected in the same manner, with the same art and reasoning applying.
Response to Arguments
Applicant's arguments filed 01/30/2026 have been fully considered but they are not persuasive.
Regarding the 35 U.S.C. §101 Rejection:
Applicant asserts that the amended limitations of claim 1 recite a technological process requiring computational implementation and thus cannot be performed in the human mind. Examiner respectfully disagrees. As stated in MPEP §2106.04(a)(2), Section III.C, “A Claim That Requires a Computer May Still Recite a Mental Process”, claims can recite a mental process even if they are claimed as being performed on a computer. As noted in the 101 rejection, the claim merely uses one or more processors comprising circuitry to use one or more neural networks as tools to perform the abstract idea. These additional elements of using a processor, a device, and a neural network amount to mere instructions to apply the judicial exception using a computer or generic computer component to perform routine tasks (i.e., storing, identifying, rendering, reusing). Therefore, the additional elements are insufficient to integrate the abstract idea into a practical application or to amount to significantly more.
Applicant asserts that amended claim 1 improves the functioning of a computer to use a specific computing environment by making data transmission more efficient and further to reduce computing resources needed to satisfy instructions related to data transmission. Examiner respectfully disagrees. Amended claim 1 fails to recite any specific details of any data transmission other than reciting in a generic and broad manner that the data is being stored in a storage of a device and retrieved from the storage. As noted in the 101 rejection above, this amounts to insignificant extra-solution activity and further is considered to be a well-understood, routine and conventional step as evidenced by MPEP §2106.05(d)(II)(iv), “Storing and retrieving information in memory”.
Furthermore, applicant asserts the technological improvement described in the present disclosure involves the use of neural networks to optimize the rendering and transmission of media content. However, the examiner asserts that the claims fail to capture the technological improvement described by the applicant. The claims do not recite any details of how the neural network is being trained, optimized, or applied in a manner that would reflect an improvement over conventional training or optimization of neural networks. Furthermore, the claims fail to recite any details of what the rendering process entails other than merely using a device to perform the rendering in a broad and generic manner. Therefore, the examiner asserts the claim is not patent eligible.
Regarding the 35 U.S.C. §103 Rejection:
Applicant argues that Karianakis fails to explicitly teach rendering subsequent frames. While the examiner agrees with this assertion, the rejection did not rely on Karianakis to teach this particular limitation. Furthermore, applicant argues that Golas fails to remedy the deficiencies of Karianakis. Examiner respectfully disagrees. Golas explicitly teaches the concept of rendering subsequent frames based on indications retrieved from storage. (“For example, some embodiments of the present disclosure may track objects across frames using application assistance along with driver resource tracking to reuse intermediate rendered results from one or more previous frames to assist the rendering of the current frame” [¶0105]). Furthermore, applicant argues Golas fails to teach or suggest a neural network and thus one of ordinary skill in the art would not have been predictably able to train and otherwise modify the neural networks of Karianakis. In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, examiner asserts that the combination of Karianakis and Golas is proper because even though Golas does not explicitly mention the use of neural networks, the reference is within the same field of endeavor of Karianakis of image and graphics processing. 
Additionally, as noted in the office action there would be some motivation to combine the teachings of Karianakis/Golas because Golas states “Some embodiments of the present disclosure may combine the backwards and/or forwards error analysis within a frame to determine a desired shading rate for various shading computations and reuse some of the pixel data from previous frames to reduce the number of shading samples by utilizing hardware and software analysis”. [¶0105, Golas]
Furthermore, in response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Merely stating that a combination would not be possible because Golas does not specifically use a neural network while Karianakis does teach this specific feature does not present a persuasive argument. As noted above, the examiner asserts the combination of Karianakis/Golas does teach every feature recited in amended claim 1 and similarly recited independent claims.
Applicant’s arguments with respect to the rejections of the dependent claims have been fully considered but they are not persuasive as they rely upon the allowability of the independent claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL H HOANG whose telephone number is (571)272-8491. The examiner can normally be reached Mon-Fri 8:30AM-4:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL H HOANG/ Examiner, Art Unit 2122