DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The indicated allowability of claims 2 and 14 is withdrawn in view of the newly discovered reference to Chao et al., U.S. Pub. No. 2021/0295801. Rejections based on the newly cited reference follow.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 7, 8, 12, 13, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Krutsch et al., U.S. Pub. No. 2016/0379331, in view of Chao et al., U.S. Pub. No. 2021/0295801.
Re: claims 1, 12 and 13 (which are rejected under the same rationale), Krutsch teaches
1. (Currently Amended) A method comprising: generating, at a first shader of a sequence of shaders of a first processing unit, a first snapshot representing an output of the first shader based on a frame; (“The GPU 170 includes one or more streaming multiprocessors 175-1 to 175-N... each of the streaming multiprocessors 175 may be configured as one or more programmable shaders (e.g., vertex, geometry, or fragment) each executing a machine code shading program... to perform image rendering operations. ”; Krutsch, [0027], Fig. 1)
Fig. 1 illustrates, for example, that the GPU includes plural streaming multiprocessors configured as shaders, such as vertex shaders, geometry shaders or fragment shaders (sequence of shaders).
(“The vertex shader unit 225 outputs the transformed vertex data to the primitive assembler unit 225 [sic] and is further configured to output the transformed vertex data to a buffer, in particular a feedback buffer 162-3...”; Krutsch, [0041], Fig. 2)
The vertex shader (first shader of a sequence of shaders of a first processing unit) outputs transformed vertex data (generates a first snapshot representing an output of the first shader based on a frame) to buffer 162-3.
and validating the output of the first shader based on a comparison of the first snapshot with a first reference snapshot generated at a second processing unit different from the first processing unit. (“Referring now to Fig. 4, in order to allow for verifying the transformed vertices output by the vertex shader 225, at least a part of the transformed vertices further processed at the subsequent states of the graphics processing pipeline is additionally stored in a buffer such as a feedback buffer 162-3... At least a subset of the transformed vertex data stored in the feedback buffer 162-3 is compared with reference data... One, several or all subsets of transformed vertex data is compared with corresponding reference data at a comparator unit 300, which is arranged to output a fault indication signal in case the comparison of the transformed vertex data or at least a subset thereof and corresponding reference data do not match with each other.”; Krutsch, [0051], [0052], Fig. 4)
In order to verify (validating) the transformed vertices output by the vertex shader (the output of the first shader), the transformed vertices that have been stored in the buffer 162-3 are compared to reference data (based on a comparison of the first snapshot with the first reference snapshot). Krutsch is silent regarding the first reference snapshot being generated at a second processing unit different from the first processing unit; however, Chao teaches
(“a first generation unit 1102, configured to generate a first image corresponding to the original image by using the MIPmap function... a second generation unit 1202, configured to generate a second image corresponding to the original image by using a Mipmap function of a GPU graphics pipeline...”; Chao, [0125], [0129])
A first image (snapshot) is generated by a first generation unit (first processing unit) and a second image is generated at a second generation unit (second processing unit different from the first processing unit). Chao is combined with Krutsch such that the verifying of Krutsch is performed using the second generation unit of Chao. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Krutsch by adding the feature of the first reference snapshot being generated at a second processing unit different from the first processing unit, in order to improve an adjustment effect of luminance adjustment on an image, as taught by Chao ([0005]).
Claim 12 is directed to a device analogous to the method of claim 1, is similar in scope, and is rejected under the same rationale. Re: claim 12, Krutsch and Chao teach
12. (Currently Amended) A device, comprising a first processing unit configured to (“The graphics processing subsystem 150 includes a graphics processing unit (GPU) 170, a GPU local memory 160, and a GPU data bus 165. ”; Krutsch, [0023], Fig. 1)
Fig. 1 illustrates that the graphics processing subsystem includes a GPU (processing unit) and a GPU local memory (memory to store a reference snapshot).
Claim 13 is directed to a medium analogous to the method of claim 1, is similar in scope, and is rejected under the same rationale. Re: claim 13, Krutsch teaches
13. (Currently Amended) A non-transitory computer readable medium embodying a set of executable instructions, the set of executable instructions to manipulate at least one processor to: (“a non-transitory, tangible computer readable storage medium is provided bearing computer executable instructions for verifying the integrity of transformed vertex data generated by a graphics pipeline, wherein the instructions, when executing on one or more processing devices, cause the one or more processing devices to perform a method...”; Krutsch, [0115])
Re: claims 7 and 19 (which are rejected under the same rationale), Krutsch and Chao teach
7. (Original) The method of claim 1, wherein validating the output of the first shader comprises: comparing the output of the first shader to the first reference snapshot based on an error tolerance threshold. (“Referring now to Fig. 4, in order to allow for verifying the transformed vertices output by the vertex shader 225, at least a part of the transformed vertices further processed at the subsequent states of the graphics processing pipeline is additionally stored in a buffer such as a feedback buffer 162-3... At least a subset of the transformed vertex data stored in the feedback buffer 162-3 is compared with reference data... One, several or all subsets of transformed vertex data is compared with corresponding reference data at a comparator unit 300, which is arranged to output a fault indication signal in case the comparison of the transformed vertex data or at least a subset thereof and corresponding reference data do not match with each other.”; Krutsch, [0051], [0052], Fig. 4)
The transformed vertex data that has been output by the vertex shader (output of the first shader) is compared to reference data (first reference snapshot). And, based on the comparison, the comparator outputs a fault indication signal if the transformed vertex data does not match the corresponding reference data (based on an error tolerance threshold).
Re: claims 8 and 20 (which are rejected under the same rationale), Krutsch and Chao teach
8. (Original) The method of claim 1, wherein the first snapshot includes input data for the first shader to generate the first snapshot. (“The data assembler 220 is a fixed-function unit that collects vertex data for high-order surfaces, primitive, and the like, and outputs the vertex data to vertex shader unit 225... The vertex shader is a programmable execution unit that is configured to execute a machine code vertex shader program, transforming vertex data as specified by the vertex shader programs.”; Krutsch, [0041], [0042], Fig. 1)
The data assembler outputs vertex data (the first snapshot includes input data) to the vertex shader unit (for the first shader), which transforms the vertex data (to generate the first snapshot).
Claims 4, 9, 16 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Krutsch and Chao as applied to claims 1, 12 and 13 above, and further in view of Chen et al., U.S. Pub. No. 2018/0033114.
Re: claims 4, 16 and 22 (which are rejected under the same rationale), Krutsch and Chao are silent regarding generating the first snapshot comprises generating the first snapshot via a first application program interface (API); and the first reference snapshot is generated at a second API different from the first API; however, Chen teaches
4. (Original) The method of claim 1, wherein: generating the first snapshot comprises generating the first snapshot via a first application program interface (API); and the first reference snapshot is generated at a second API different from the first API. (“The kernel codes programmed in different programming frameworks are referred herein as different types of kernel codes. Correspondingly, APIs programmed in different programming frameworks are referred herein as different types of APIs... The GPU may receive commands from a driver module for executing a first kernel code of a first programming framework and a second kernel code of a second programming framework. The commands may include a first set of commands issued by a first API and a second set of commands issued by a second API... The GPU may assign a first set of shader cores to execute the first kernel code and assign a second set of shader cores to execute the second kernel code... The GPU then concurrently executes the first kernel code with the first set of shader cores and the second kernel code with the second set of shader cores according to decoded commands. ”; Chen, [0014])
The GPU assigns a first set of shader cores to execute the first kernel code that includes a first set of commands issued by a first API (first application program interface (API)). And, the GPU assigns a second set of shader cores to execute the second kernel code that includes a second set of commands issued by a second API (second API different from the first API). The output of the first set of shader cores is considered to be the first snapshot (generating the first snapshot via a first application program interface (API)). And, the output of the second set of shader cores is considered to be the first reference snapshot (the first reference snapshot is generated at a second API different from the first API). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Krutsch by adding the feature of generating the first snapshot comprises generating the first snapshot via a first application program interface (API); and the first reference snapshot is generated at a second API different from the first API, in order to provide high efficiency and reduce context switches such that the performance of a graphics system can be significantly improved, as taught by Chen ([0006]).
Re: claim 9, Krutsch and Chao are silent regarding generating the first snapshot comprises generating the first snapshot via a first programming framework; and the first reference snapshot is generated at a second programming framework different from the first programming framework; however, Chen teaches
9. (Original) The method of claim 1, wherein: generating the first snapshot comprises generating the first snapshot via a first programming framework; and the first reference snapshot is generated at a second programming framework different from the first programming framework. (“The kernel codes programmed in different programming frameworks are referred herein as different types of kernel codes. Correspondingly, APIs programmed in different programming frameworks are referred herein as different types of APIs... The GPU may receive commands from a driver module for executing a first kernel code of a first programming framework and a second kernel code of a second programming framework. The commands may include a first set of commands issued by a first API and a second set of commands issued by a second API... The GPU may assign a first set of shader cores to execute the first kernel code and assign a second set of shader cores to execute the second kernel code... The GPU then concurrently executes the first kernel code with the first set of shader cores and the second kernel code with the second set of shader cores according to decoded commands. ”; Chen, [0014])
The GPU assigns a first set of shader cores to execute the first kernel code that includes a first set of commands issued in a first programming framework. And, the GPU assigns a second set of shader cores to execute the second kernel code that includes a second set of commands issued in a second programming framework. The output of the first set of shader cores is considered to be the first snapshot (generating the first snapshot via a first programming framework). And, the output of the second set of shader cores is considered to be the first reference snapshot (the first reference snapshot is generated at a second programming framework different from the first programming framework). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Krutsch by adding the feature of generating the first snapshot comprises generating the first snapshot via a first programming framework; and the first reference snapshot is generated at a second programming framework different from the first programming framework, in order to provide high efficiency and reduce context switches such that the performance of a graphics system can be significantly improved, as taught by Chen ([0006]).
Claims 5, 6, 11, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Krutsch and Chao as applied to claims 1 and 13 above, and further in view of Lee et al., U.S. Pub. No. 2023/0326117.
Re: claims 5 and 17 (which are rejected under the same rationale), Krutsch and Chao are silent regarding generating, at a second shader of the sequence of shaders, a second snapshot representing an output of the second shader based on a frame; and validating the output of the second shader based on a comparison of the second snapshot with a second reference snapshot; however, Lee teaches
5. (Currently Amended) The method of claim 1, further comprising: generating, at a second shader of the sequence of shaders, a second snapshot representing an output of the second shader based on a frame; (“Shader inputs may either originate from the graphics application or they are the intermediate values that are output from a prior shader.”; Lee, [0061], Fig. 1)
Fig. 1 illustrates, for example, that the vertex shader and the fragment shader are a sequence of shaders.
(“Fragment shader 55a is a regular precision fragment shader (RPFS), and fragment shader 55b is a low precision fragment shader (LPFS), such that the precision of some instructions and registers in the RPFS 55a is higher than those in the LPFS 55b... The RPFS 55a may be used to generate an image with greater image fidelity than the LPFS 55b. For example, the RPFS 55a uses 32-bit floating-point registers and instructions to calculate the color of the pixel whereas the LPFS 55b uses 16-bit floating-point for the same registers and instructions. The RPFS 55a processes the pilot pixel 203 to produce a regular fidelity value for pixel 206. The LPFS 55b processes the pilot pixel 203 to produce a lower fidelity value for pixel 207.”; Lee, [0074], [0075], Fig. 3)
The low precision fragment shader (second shader of the sequence of shaders) generates an image (generates a second snapshot representing an output of the second shader based on a frame) and the regular precision fragment shader generates an image (second reference snapshot).
and validating the output of the second shader based on a comparison of the second snapshot with a second reference snapshot. (“Fragment shader 55a is a regular precision fragment shader (RPFS), and fragment shader 55b is a low precision fragment shader (LPFS), such that the precision of some instructions and registers in the RPFS 55a is higher than those in the LPFS 55b... The RPFS 55a may be used to generate an image with greater image fidelity than the LPFS 55b. For example, the RPFS 55a uses 32-bit floating-point registers and instructions to calculate the color of the pixel whereas the LPFS 55b uses 16-bit floating-point for the same registers and instructions. The RPFS 55a processes the pilot pixel 203 to produce a regular fidelity value for pixel 206. The LPFS 55b processes the pilot pixel 203 to produce a lower fidelity value for pixel 207.”; Lee, [0074], [0075], Fig. 3)
The image (second snapshot) generated by the low precision fragment shader (second shader) has a lower image fidelity than the image (second reference snapshot) generated by the regular precision fragment shader (comparison of the second snapshot with the second reference snapshot).
(“For example, vertex shader 45 outputs are either stored to memory or provided as input to rasterizer 50; rasterizer 50 outputs are provided as input to fragment shader 55;... Subsequent to the generation of vertex output data, the vertex output data then input to and processed by rasterizer 50. The vertex data may comprise both position data and associated color data for the vertices. Rasterizer 50 processes the vertex output data and generates fragment input data based on the vertex output data...The fragment input data therefore comprises position data and color data for every pixel contained within the triangle... The fragment input data is then input to a fragment shader 55 which processes the fragment input data and generates fragment output data based on the fragment input data.”; Lee, [0061], [0067], [0068], Fig. 1)
Fig. 1 illustrates that the vertex shader outputs vertex output data to the rasterizer, which generates fragment input data and outputs this fragment input data to the fragment shader.
(“The regular fidelity pixel value 206 is compared to the lower fidelity pixel value 207 to determine whether the LPFS 55b would produce an image of acceptable fidelity. The image fidelity may be acceptable if the differences between the two values is less than an error threshold.”; Lee, [0076])
The output pixel of the LPFS is determined to have an acceptable fidelity (validating) if the difference (comparison) between the output pixel (second reference snapshot) of the RPFS and the output pixel (second snapshot) of the LPFS is less than an error threshold. Thus, the low precision fragment shader (LPFS) is validated based on the difference/comparison. Since the vertex shader outputs to the rasterizer, which outputs to the fragment shader, the validation of the LPFS also validates the output of the vertex shader (validating the output of the first shader based on a comparison of the second snapshot with the second reference snapshot). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Krutsch by adding the feature of generating, at a second shader of the sequence of shaders, a second snapshot representing an output of the second shader based on a frame; and validating the output of the second shader based on a comparison of the second snapshot with a second reference snapshot, in order to determine if the LPFS may be used without substantially reducing the fidelity of the image, which leads to a lower cost computation, as taught by Lee ([0006]).
Re: claims 6 and 18 (which are rejected under the same rationale), Krutsch, Chao and Lee teach
6. (Original) The method of claim 5, further comprising: selecting the first shader and the second shader to store the first and second snapshots based on programmable snapshot control information. (“The high-level shader programs transmitted by the application program 141 may include at least one of a high-level vertex shader program, a high-level geometry shader program and a high-level fragment shader program... For example, compiler/linker 210 translates the high-level shader programs designated for different domains (e.g., the high-level vertex shader program, the high-level geometry shader program, and the high-level fragment shader program), which are written in high level shading language, into distinct compiled software objects in the form of assembly code. ”; Krutsch, [0032])
The high-level vertex shader program (first shader) and the high-level fragment shader program (second shader) are considered to include programmable snapshot control information.
(“The vertex shader unit 225 is a programmable execution unit that is configured to execute a machine code vertex shader program, transforming vertex data as specified by the vertex shader program... The vertex shader unit 225 outputs the transformed vertex data to the primitive assembler unit 225 [sic] and is further configured to output the transformed vertex data to a buffer, in particular a feedback buffer 162-3.”; Krutsch, [0036], [0041])
In the vertex shader domain, the vertex shader unit (first shader) is a programmable execution unit that is selected to transform vertex data as specified by the vertex shader program. Then the vertex shader unit outputs the transformed vertex data (first snapshot) to a buffer (selecting the first shader to store the first snapshot based on programmable snapshot control information).
(“The fragment shader unit 245 is a programmable execution unit that is configured to execute machine code fragment shader programs to transform fragments received from rasterizer unit 245 as specified by the machine code fragment shader program. For example, the fragment shader unit 245 may be programmed to perform operations... to produce shaded fragments that are output to a raster operations unit 250... The raster operations unit 250 or a per-fragment operations unit optionally performs fixed-function computations... and outputs pixel data as processed graphics data for storage in a buffer in the GPU local memory 160, such as the frame buffer 161.”; Krutsch, [0040])
In the fragment shader domain, the fragment shader (second shader) is a programmable execution unit that is selected to produce shaded fragments that are output (second snapshot) to a raster operations unit, which then outputs pixel data for storage in a buffer. Thus, the fragment shader stores its output to a buffer via the raster operations unit (selecting the second shader to store the second snapshot based on programmable snapshot control information).
Re: claim 11, Krutsch and Chao are silent regarding generating, at a second shader, a second snapshot representing an output of the second shader; and validating the second snapshot based on a comparison of the second snapshot to a second reference snapshot; however, Lee teaches
11. (Original) The method of claim 1, further comprising: generating, at a second shader, a second snapshot representing an output of the second shader; (“Fragment shader 55a is a regular precision fragment shader (RPFS), and fragment shader 55b is a low precision fragment shader (LPFS), such that the precision of some instructions and registers in the RPFS 55a is higher than those in the LPFS 55b... The RPFS 55a may be used to generate an image with greater image fidelity than the LPFS 55b. For example, the RPFS 55a uses 32-bit floating-point registers and instructions to calculate the color of the pixel whereas the LPFS 55b uses 16-bit floating-point for the same registers and instructions. The RPFS 55a processes the pilot pixel 203 to produce a regular fidelity value for pixel 206. The LPFS 55b processes the pilot pixel 203 to produce a lower fidelity value for pixel 207.”; Lee, [0074], [0075], Fig. 3)
The low precision fragment shader (second shader) generates an image (generating a second snapshot representing an output of the second shader) and the regular precision fragment shader generates an image (second reference snapshot).
and validating the second snapshot based on a comparison of the second snapshot to a second reference snapshot. (“The regular fidelity pixel value 206 is compared to the lower fidelity pixel value 207 to determine whether the LPFS 55b would produce an image of acceptable fidelity. The image fidelity may be acceptable if the difference between the two values is less than an error threshold.”; Lee, [0076], Fig. 4)
The output pixel of the LPFS is determined to have an acceptable fidelity (validating) if the difference (comparison) between the output pixel (second reference snapshot) of the RPFS and the output pixel (second snapshot) of the LPFS is less than an error threshold. Thus, the output of the low precision fragment shader (LPFS) is validated based on the difference/comparison (validating the second snapshot based on a comparison of the second snapshot to a second reference snapshot). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Krutsch by adding the feature of generating, at a second shader, a second snapshot representing an output of the second shader; and validating the second snapshot based on a comparison of the second snapshot to a second reference snapshot, in order to determine if the LPFS may be used without substantially reducing the fidelity of the image, which leads to a lower cost computation, as taught by Lee ([0006]).
Allowable Subject Matter
Claims 3, 10, 15 and 21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. None of the prior art teaches or suggests:
From claims 3, 15 and 21 – “wherein the first processing unit comprises a graphics processing unit and the second processing unit comprises a central processing unit.”
From claim 10 – “and the first reference snapshot is generated with second shader code to perform the image processing operation, the second shader code different from the first shader code.”
As allowable subject matter has been indicated, applicant's reply must either comply with all formal requirements or specifically traverse each requirement not complied with. See 37 CFR 1.111(b) and MPEP § 707.07(a).
Response to Arguments
Applicant’s arguments, see Amendment/Request for Reconsideration-After Non-Final Rejection, filed 9/30/2025, with respect to the Objection to the Title have been fully considered and are persuasive. The Objection to the Title of the previous Office Action has been withdrawn.
Applicant’s arguments, see Amendment/Request for Reconsideration-After Non-Final Rejection, filed 9/30/2025, with respect to the Objection to the Specification have been fully considered and are persuasive. The Objection to the Specification of the previous Office Action has been withdrawn.
Applicant's arguments filed 9/30/2025 have been fully considered but they are not persuasive. Applicant argues:
“In the interest of advancing the present application to issuance, independent claims 1 and 13 have been amended to incorporate the subject matter of claims 2 and 14, respectively. Independent claim 12 has been similarly amended to recite the additional allowable subject matter represented in claim 14. As such, claims 1, 12, and 13, and all claims depending therefrom, are allowable for at least the reasons identified by the Office Action. Moreover, these claims recite additional novel and non-obvious features. Reconsideration and withdrawal of the anticipation rejections of claims 1, 7, 8, 12, 13, 19 and 20 therefore are respectfully requested.”
Allowability has been withdrawn, and claims 1, 7, 8, 12, 13, 19 and 20 have been rejected. Please see the corresponding rejections.
Applicant's arguments filed 9/30/2025 have been fully considered but they are not persuasive. Applicant argues:
“In the interest of advancing the present application to issuance, independent claims 1, 12, and 13 have been amended to incorporate the subject matter of claims 2 and 14, respectively. Claims 1, 12, and 13, and all claims depending therefrom, thus are allowable for at least the reasons identified by the Office Action. Reconsideration and withdrawal of the above-referenced obviousness rejections of claims 4-6, 9, 11, and 16-18 therefore are respectfully requested.”
Allowability has been withdrawn, and claims 1, 4-6, 9, 11, 12, 13 and 16-18 have been rejected. Please see the corresponding rejections.
Applicant's arguments filed 9/30/2025 have been fully considered but they are not persuasive. Applicant argues:
“New claims 21 and 22 have been added. These claims depend from claim 12. Therefore, new claims 21 and 22 are novel and non-obvious in view of the cited art for at least the same reasons as claim 12. Entry thereof is therefore respectfully requested.”
Claim 21 includes allowable subject matter, and claim 22 has been rejected.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONNA J RICKS whose telephone number is (571)270-7532. The examiner can normally be reached on M-F 7:30am-5pm EST (alternate Fridays off).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached on 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Donna J. Ricks/Examiner, Art Unit 2612
/DEVONA E FAULK/Supervisory Patent Examiner, Art Unit 2618