DETAILED ACTION
Claims 1, 4, 5, 8, 10, 13, 14, 17, 19, and 21 are amended. Claims 1-22 are pending in the application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Amendment
Amendments to the abstract of the disclosure are fully considered and are satisfactory to overcome the objections directed to the specification in the previous Office Action.
Amendments to claim 1 are fully considered and are satisfactory to overcome the objections directed to claims 1-9 in the previous Office Action.
Amendments to claim 4 are fully considered and are satisfactory to overcome the rejections under 35 U.S.C. §112(b) directed to claim 4 in the previous Office Action.
Amendments to claims 1 and 10 are fully considered and are satisfactory to overcome the rejections under 35 U.S.C. §101 directed to claims 1-18 in the previous Office Action.
Specification
The disclosure is objected to because of the following informalities:
In the abstract: “Application Protocol Interface” should have been --Application Programming Interface--.
Appropriate corrections are required. Applicant is advised to review the entire disclosure for further needed corrections.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 19-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
With respect to claim 19: The claim is directed to an abstract idea without significantly more because:
Step 2A, Prong 1: The limitations “monitor a command stream that includes function calls to be carried out by a compute service or a second processing resource remote from the first processing resource related to an application”, “after identifying a sequence of a plurality of the function calls that represents a batch and satisfies a set of one or more criteria, creating a templatized version of the batch having a symbolic name and including placeholders for at least a subset of variable arguments of the plurality of function calls”, and “after observing a subsequent occurrence of the sequence within the command stream”, as drafted, are functions that, under their broadest reasonable interpretation, recite the abstract idea of a mental process. The limitations encompass a human mind carrying out the functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper, such as observing stream information, evaluating function call information, and generating template information. Thus, these limitations recite and fall within the “Mental Processes” grouping of abstract ideas. See MPEP §2106.04(a)(2).
Step 2A, Prong 2: This judicial exception is not integrated into a practical application.
The additional elements “A system comprising: a first processing resource; and instructions, which when executed by the first processing resource cause the first processing resource to” and “the function calls are associated with a transactional application programming interface (API) protocol” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer and/or generic computer components. See MPEP §2106.05(f).
Furthermore, “reduce an amount of data transmitted over an interconnect between the application and the compute service or the second processing resource” and “transmitting via the interconnect the symbolic name and values for the subset of variable arguments” do nothing more than add the insignificant extra-solution activity of merely gathering data to the judicial exception. See MPEP §2106.05(g).
Accordingly, the additional elements do not integrate the recited judicial exception into a practical application and the claim is therefore directed to the judicial exception.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “A system comprising: a first processing resource; and instructions, which when executed by the first processing resource cause the first processing resource to” and “the function calls are associated with a transactional application programming interface (API) protocol” amount to no more than mere instructions, or generic computer/computer components to carry out the exception. See MPEP 2106.05(f).
Furthermore, regarding the limitations “reduce an amount of data transmitted over an interconnect between the application and the compute service or the second processing resource” and “transmitting via the interconnect the symbolic name and values for the subset of variable arguments”, the courts have identified that mere data gathering is well-understood, routine, and conventional activity. See MPEP §2106.05(d).
The recitation of generic computer instructions and computer components to apply the judicial exception, and of mere data gathering, does not amount to significantly more and thus cannot provide an inventive concept. Accordingly, the claims are not patent eligible under 35 U.S.C. §101.
With respect to claims 20-22: The limitations recited therein amount to no more than mere instructions, or generic computer/computer components, to carry out the exception. See MPEP §2106.05(f). As such, claims 20-22 are also not patent eligible under 35 U.S.C. §101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5, 10-14, and 19-22 are rejected under 35 U.S.C. 103 as being unpatentable over Luo et al. (US 2019/0042316 A1; hereinafter Luo) in view of Foote et al. (US 2022/0334845 A1; hereinafter Foote). Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Luo in view of Foote, and further in view of Khmelev.
With respect to claim 1, Luo teaches: A non-transitory machine-readable medium storing instructions, which when executed by a processing resource of a computer system cause the processing resource to (see e.g. Luo, paragraphs 100, 129):
monitor, at an application platform (see e.g. Luo, paragraph 30: “media middleware 111 may include any suitable application”) representing a first computer system or a first compute resource (see e.g. Fig. 1: “CPU 101”) of a computer system (see e.g. Luo, Fig. 1: “100”; paragraph 26: “system 100 may include a central processing unit 101”) on which an application is executed (see e.g. Luo, paragraph 2: “each media task, such as an encode task or the like, from middleware or another software application”; paragraph 30: “application or the like for providing function call 142”; and paragraph 30 “media middleware 111 may include any suitable application or the like for providing function call 142”), a command stream that includes function calls (see e.g. Luo, paragraph 1: “upper level application program interface (API) calls to GPU commands”; and paragraph 30: “application or the like for providing function call 142”) to be carried out by an executer (see e.g. Luo, Fig. 1: “Media Task Processor 121”; and paragraph 38: “Media task processors 121 may launch and perform media tasks associated with each media task”; and paragraph 70: “one or more of the multiple media tasks may be performed by media task processors 121”) of a server platform (see e.g. Luo, paragraph 24: “media server applications for continuous decoding and encoding (e.g., transcoding) tasks. For example, in continuous decoding and encoding implementations such techniques may be advantageously applied on server devices”; and paragraph 27: “a server platform”), representing a second computer system or a second compute resource (see e.g. Luo, Fig. 1: “GPU 102”) of the computer system (see e.g. Luo, paragraph 26: “system 100 may include… a graphics processing unit 102”), related to the application (see e.g. 
Luo, paragraph 1: “upper level application program interface (API) calls to GPU commands”; paragraph 2: “each media task, such as an encode task or the like, from middleware or another software application (e.g., via the discussed API calls)”),
reduce an amount of data transmitted over an interconnect (see e.g. Luo, paragraph 1: “a central processing unit (CPU) may communicate with a graphics processing unit (GPU)”; paragraph 44: “Using such a media command template structure, media driver 114 may launch a batch or set of tasks with a single parameter parsing, command assembly, and submission. Such batching techniques may improve central processing unit latency performance. For example, latency associated with media driver 114 may be reduced, for a batch depth of N, up to (N−1)/N (e.g., 75% for a batch depth of four, 87.5% for a batch depth of eight, and so on)”), representing (i) a network coupling the first computer system in communication with the second computer system or (ii) a bus coupling the first compute resource in communication with the second compute resource (see e.g. Luo, paragraph 1: “a central processing unit (CPU) may communicate with a graphics processing unit (GPU)”; and Fig. 1, 5, 7), between the application and the executer (see e.g. Luo, paragraph 2: “each media task, such as an encode task or the like, from middleware or another software application (e.g., via the discussed API calls)”; paragraph 38: “Media task processors 121 may launch and perform media tasks associated with each media task”; and Fig. 1) by:
Since Luo discloses CPU 101 and GPU 102 within the system 100 communicating with each other (see e.g. Luo, paragraph 1; Fig. 1, 5, 7), Luo inherently discloses a bus connection between the CPU 101 and GPU 102.
during monitoring of the command stream (see e.g. Luo, paragraph 45: “process 300 for batching media tasks”; and Fig. 3):
identifying a plurality of the function calls that represents a batch (see e.g. Luo, paragraph 20: “in a media task batching mode, multiple media tasks may be combined into one function call including the media tasks”; and paragraph 29: “receive media tasks 144 and media middleware 111 may generate a single function call (FC) 142 based on some or all of media tasks 144 responsive to batch signal (BS)”) and satisfies a set of one or more criteria (see e.g. Luo, paragraph 29: “Batch signal 141 may be generated by media middleware 111, for example, based on a usage scenario (e.g. a media processing task being performed) and/or based on other criteria such as a low power mode of system 100, a computing resource usage level of central processing unit 101 and/or graphics processing unit 102, or the like”); and
creating a template of the batch (see e.g. Luo, paragraph 55: “FIG. 4 illustrates an example batched media command template 400”; and Fig. 4) having a symbolic name (see e.g. Luo, paragraph 55: “batched media command template 400”) and including placeholders (see e.g. Luo, paragraph 56: “pointers 405”; and Fig. 4: “405”) for at least a subset of variable arguments (see e.g. Luo, paragraph 56: “task specific parameters 406, 407, 408”; and Fig. 4: “406-408”) of the plurality of function calls (see e.g. Luo, paragraph 56: “pointers 405 to task specific parameters 406, 407, 408. For example, task specific parameters 406 may be associated with a first task of the media tasks, task specific parameters 407 may be associated with a second task of the media tasks, task specific parameters 408 may be associated with a third task of the media tasks”); and
causing the executer to execute the plurality of function calls (see e.g. Luo, paragraph 38: “Media task processors 121 may launch and perform media tasks associated with each media task”) by transmitting via the interconnect the symbolic name and values for the subset of variable arguments (see e.g. Luo, paragraph 21: “Based on the single function call and a batched media command template, a single batched media command set may be generated. For example, a media driver may parse the single function call and assemble the single batched media command set based on the batched media command template. The single batched media command set may include a first portion (e.g., based on a command template base) including commands and/or parameters corresponding to all of the media tasks…The second portion (e.g., based on a task specific parameters portion) may include task dependent parameters for particular media tasks or pointers to such task dependent parameters”; and paragraph 22: “single batched media command set may be submitted to and/or retrieved by a graphics processor and the graphics processor may perform the multiple media tasks based on the single batched media command set to generate output media data”).
Luo does not but Foote teaches:
wherein the function calls are associated with a transactional application programming interface (API) protocol (see e.g. Foote, paragraph 52: “API 110 can be a CUDA API from NVIDIA (e.g., see FIG. 2). For example, a graphics processing program or a math library application running on CPU 102 can submit several requests to API 110 to perform operations using GPU 120 to accelerate a processing of several operations (e.g., convolution, Fast Fourier Transforms, general matrix math operations such as matrix multiplication including sparse matrices); API 110 communicates with driver 115 to prepare graphics kernels to perform such operations”); and
Luo and Foote are analogous art because they are in the same field of endeavor: handling API operations directed to GPUs. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Luo with the teachings of Foote. The motivation/suggestion would be to improve communication efficiency between the CPU and the GPU (see Foote, paragraph 52).
With respect to claim 2, Luo as modified teaches: The non-transitory machine-readable medium of claim 1, wherein the values comprise immediate values (see e.g. Luo, paragraph 35: “a quantization parameter for each frame of the media tasks may be provided in the task specific parameters portion”; and paragraph 42: “high level parameters received via function call”) or global memory references.
With respect to claim 3, Luo as modified teaches: The non-transitory machine-readable medium of claim 1, wherein the set of one or more criteria includes a frequency threshold over a period of time (see e.g. Luo, paragraph 40: “determine whether or not to apply batch processing based on… usage rate of the central processing unit, graphics processing unit, memory, or the like”).
With respect to claim 4, Luo as modified teaches: The non-transitory machine-readable medium of claim 1, wherein the instructions further cause the processing resource to:
evaluate historical usage (see e.g. Luo, paragraph 29: “Batch signal 141 may be generated by media middleware 111, for example, based on a usage scenario (e.g. a media processing task being performed)”)
after determining the batch no longer satisfies a criterion of the one or more criteria, remove the template of the batch (see e.g. Luo, paragraph 29: “In usage scenarios where batching may be disadvantageous, batch signal 141 may indicate no batching or may not be asserted or the like”).
Luo does not but Foote teaches:
of the transactional API protocol (see e.g. Foote, paragraph 52: “API 110 can be a CUDA API from NVIDIA (e.g., see FIG. 2). For example, a graphics processing program or a math library application running on CPU 102 can submit several requests to API 110 to perform operations using GPU 120 to accelerate a processing of several operations (e.g., convolution, Fast Fourier Transforms, general matrix math operations such as matrix multiplication including sparse matrices); API 110 communicates with driver 115 to prepare graphics kernels to perform such operations”); and
Luo and Foote are analogous art because they are in the same field of endeavor: handling API operations directed to GPUs. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Luo with the teachings of Foote. The motivation/suggestion would be to improve communication efficiency between the CPU and the GPU (see Foote, paragraph 52).
With respect to claim 5, Luo as modified teaches: The non-transitory machine-readable medium of claim 1, wherein the instructions further cause the processing resource to:
track historical usage (see e.g. Luo, paragraph 29: “Batch signal 141 may be generated by media middleware 111, for example, based on a usage scenario (e.g. a media processing task being performed)”)
based on the historical usage, identify a plurality of distinct usage patterns associated with deterministic and predictable changes in use (see e.g. Luo, paragraph 29: “Batch signal 141 may be generated by media middleware 111, for example, based on a usage scenario (e.g. a media processing task being performed) and/or based on other criteria such as a low power mode of system 100, a computing resource usage level of central processing unit 101 and/or graphics processing unit 102, or the like”; and paragraph 40: “determine whether or not to apply batch processing based on the particular usage scenario (e.g., application being processed) and/or other criteria such as the usage rate of the central processing unit, graphics processing unit, memory, or the like”)
Luo does not but Foote teaches:
of the transactional API protocol (see e.g. Foote, paragraph 52: “API 110 can be a CUDA API from NVIDIA (e.g., see FIG. 2). For example, a graphics processing program or a math library application running on CPU 102 can submit several requests to API 110 to perform operations using GPU 120 to accelerate a processing of several operations (e.g., convolution, Fast Fourier Transforms, general matrix math operations such as matrix multiplication including sparse matrices); API 110 communicates with driver 115 to prepare graphics kernels to perform such operations”); and
of the transactional API protocol (see e.g. Foote, paragraph 52, as quoted above).
Luo and Foote are analogous art because they are in the same field of endeavor: handling API operations directed to GPUs. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Luo with the teachings of Foote. The motivation/suggestion would be to improve communication efficiency between the CPU and the GPU (see Foote, paragraph 52).
With respect to claim 6, Luo as modified teaches: The non-transitory machine-readable medium of claim 5,
Luo does not but Khmelev teaches:
wherein the plurality of distinct usage patterns are associated with respective times of day (see e.g. Khmelev, column 9, lines 14-20: “one or more contextual factors, such as… the usage history of the end-user for this API, the time of day”).
Luo and Khmelev are analogous art because they are in the same field of endeavor: handling API operations. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Luo with the teachings of Khmelev. The motivation/suggestion would be to improve support for various businesses and/or organizations that might utilize the API (see e.g. Khmelev, column 1, lines 39-50), thus increasing the overall extensibility of the system.
With respect to claims 10-15: Claims 10-15 are directed to a method corresponding to the active functions implemented by executing the instructions stored in the machine-readable medium of claims 1-6, respectively; please see the rejections directed to claims 1-6 above, which cover the limitations recited in claims 10-15.
With respect to claim 19: Claim 19 is directed to a system comprising a first processing resource and instructions which, when executed by the first processing resource, cause the first processing resource to implement active functions corresponding to the functions implemented by execution of the instructions stored in the machine-readable medium of claim 1; please see the rejection directed to claim 1 above, which also covers the limitations recited in claim 19. Note that Luo also discloses a system 802 including a processor 810 and instructions (see e.g. Luo, Fig. 8) to implement functions corresponding to the functions implemented by execution of the instructions stored in the machine-readable medium of claim 1.
With respect to claim 20, Luo as modified teaches: The system of claim 19, wherein the first processing resource comprises a central processing unit (CPU) (see e.g. Luo, paragraph 79: “Processor 810 may be implemented as a… central processing unit (CPU)”), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA) of a first computer system.
With respect to claim 21, Luo as modified teaches: The system of claim 19, wherein the second processing resource comprises a CPU, a GPU (see e.g. Luo, paragraph 82: “Graphics subsystem 815 may be a graphics processing unit (GPU)”), an ASIC, or an FPGA of a second computer system.
With respect to claim 22, Luo as modified teaches: The system of claim 20, wherein the second processing resource comprises a second CPU, a second GPU (see e.g. Luo, paragraph 82: “Graphics subsystem 815 may perform processing of images such as still or video for display. Graphics subsystem 815 may be a graphics processing unit (GPU) or a visual processing unit (VPU)”), a second ASIC, or a second FPGA of the first computer system (see e.g. Luo, Fig. 8).
Response to Arguments
Applicant's arguments filed 11/12/2025 have been fully considered but they are not persuasive. In detail:
(i) Regarding Applicant’s arguments with respect to the rejections under 35 U.S.C. §101 (Remarks, pages 11-15), the Examiner initially notes that amendments made to claim 1 are not present in claim 19.
Accordingly, the limitations “after identifying a sequence of a plurality of the function calls that represents a batch and satisfies a set of one or more criteria, creating a templatized version of the batch having a symbolic name and including placeholders for at least a subset of variable arguments of the plurality of function calls” and “after observing a subsequent occurrence of the sequence within the command stream” recited in claim 19 can be performed mentally by evaluating function call information in accordance with criteria and creating templatized batch information with particular variables.
Furthermore, the limitations “reduce an amount of data transmitted over an interconnect between the application and the compute service or the second processing resource” and “transmitting via the interconnect the symbolic name and values for the subset of variable arguments” recited in claim 19 go no further than merely transmitting such information. The courts have identified such data transmission operations as being well-understood, routine, conventional and/or insignificant extra-solution activities that fail to integrate the judicial exception into a practical application in a meaningful manner. Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014) (computer receives and sends information over a network). See MPEP §2106.05(d).
Consequently, the rejections under 35 USC §101 directed to claims 19-22 are maintained. For more details, please see the corresponding rejections above.
(ii) Regarding Applicant’s arguments with respect to the rejections under 35 U.S.C. §103 directed to claim 1 (Remarks, pages 15-20), note that each media task corresponds to a media processing function implemented by the GPU; for example, a video encoding task corresponds to a video encoding function implemented by the GPU, and a bitstream decoding task corresponds to a bitstream decoding function implemented by the GPU.
More specifically, Luo discloses an application executing on CPU 101 sending media tasks to a GPU as GPU commands (i.e. a command stream), wherein the media tasks (i.e. the media task functions) are batched into a single function call for sending to the GPU to reduce the data transmission amounts from the CPU to the GPU (see e.g. Luo, paragraph 1: “upper level application program interface (API) calls to GPU commands”; paragraph 2: “each media task, such as an encode task or the like, from middleware or another software application (e.g., via the discussed API calls)”; paragraph 44: “Using such a media command template structure, media driver 114 may launch a batch or set of tasks with a single parameter parsing, command assembly, and submission. Such batching techniques may improve central processing unit latency performance. For example, latency associated with media driver 114 may be reduced, for a batch depth of N, up to (N−1)/N (e.g., 75% for a batch depth of four, 87.5% for a batch depth of eight, and so on)”).
That is, Luo discloses the features of monitoring a command stream that includes media tasks (i.e. media task functions) directed to the GPU and identifying the media tasks for batching into a single function call as a command to the GPU.
Luo further discloses utilizing a batch media command template 400 with pointers 405 to parameters of the media tasks to form the batch of media tasks (see e.g. Luo, paragraph 56: “pointers 405 to task specific parameters 406, 407, 408. For example, task specific parameters 406 may be associated with a first task of the media tasks, task specific parameters 407 may be associated with a second task of the media tasks, task specific parameters 408 may be associated with a third task of the media tasks”).
Consequently, Luo discloses the limitations of “monitor”, “reduce an amount of data transmitted”, “identifying a plurality of the function calls that represents a batch”, and “creating a template of the batch” as recited in claim 1. For more details, please see the rejections directed to claim 1 above.
(iii) Regarding Applicant’s arguments with respect to claim 3 (Remarks, page 20), in view of discussion (ii) above, note that Luo does disclose media tasks corresponding to media functions. Luo further discloses the usage rate of the CPU, GPU, and memory (i.e. usage frequency) as a criterion for the batching process (see e.g. Luo, paragraph 40: “determine whether or not to apply batch processing based on… usage rate of the central processing unit, graphics processing unit, memory, or the like”).
As such, the rejections directed to claim 3 are maintained. For more details, please see the rejection directed to claim 3 above.
Applicant’s arguments with respect to the limitations “function calls are associated with a transactional application programming interface (API) protocol” recited in claims 1, 10, 19, and the related limitations recited in claim 4, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Allowable Subject Matter
Claims 7-9 and 16-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
The prior art references do not explicitly disclose maintaining a first profile for a first distinct usage pattern of the plurality of distinct usage patterns, the first profile including a first batch space including a plurality of templatized versions of batches created during the first distinct usage pattern; and maintaining a second profile for a second distinct usage pattern of the plurality of distinct usage patterns, the second profile including a second batch space including a plurality of templatized versions of batches created during the second distinct usage pattern as recited in claims 7 and 16.
CONCLUSION
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Gurfinkel et al. (US 2023/0005096 A1) discloses utilizing various transactional API protocols, such as OpenCL, oneAPI, CUDA, etc., for implementing communications between various computing resources (see paragraphs 51, 54, 383).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Umut Onat whose telephone number is (571)270-1735. The examiner can normally be reached M-Th 9:00-7:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kevin L Young can be reached at (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/UMUT ONAT/Primary Examiner, Art Unit 2194