Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
The following action is in response to the communication(s) received on 01/30/2026.
As of the claims filed 01/30/2026:
Claims 1-4, 6-14, 16-18, and 20 have been amended.
Claims 21 and 22 have been added.
Claims 5 and 15 have been canceled.
Claims 1-4, 6-14, and 16-22 are now pending.
Claims 1, 11, and 20 are independent claims.
Response to Arguments
Applicant’s arguments filed 01/30/2026 have been fully considered, but are not fully persuasive.
Applicant's arguments regarding the amended limitations have been considered with respect to subject matter eligibility and are persuasive; the eligibility rejection is therefore withdrawn.
Applicant's arguments regarding the amended limitations have been considered with respect to novelty and non-obviousness, but are unpersuasive.
Applicant asserts that Patton does not teach that a … neuromorphic compute platform …; a plurality of … compute resources …; and a plurality of … memory circuits are hardware-based (p.9, last ¶, 1st sentence). This argument has been considered, but is moot in view of the new ground of rejection relying on Schuman in the Patton/Schuman/Shukla combination (Schuman [p.4 §3.1], [p.5 fig.2], [p.6 2nd ¶]), where the method is performed via μCaspian, an FPGA in a physical evaluation platform, thus corresponding to the hardware-based neuromorphic compute platform, plurality of compute resources, and plurality of memory circuits.
Applicant further asserts that Patton does not teach that the sensor is integrated with the compute platform (pp.9-10). This argument has been considered, but is moot in view of the new ground of rejection relying on Shukla [0033], where the unitary subassembly structure corresponds to the chip integrating the sensor and the compute platform and thus configured as the neurosynaptic core.
Applicant further asserts that Patton does not teach that a result of processing the sensor data with the hardware-based neuromorphic compute platform is provided to a local computing device of an autonomous vehicle for use in navigating the autonomous vehicle, wherein the hardware-based neuromorphic compute platform is implemented on the autonomous vehicle (p.10 ¶2). This argument has been considered, but is moot in view of the new ground of rejection relying on Schuman in the Patton/Schuman/Shukla combination (Schuman [p.4 §3.1], [p.5 fig.2], [p.6 2nd ¶]), where interacting with the car for real-world evaluation corresponds to providing communication for navigating the autonomous vehicle, and μCaspian on the physical vehicle corresponds to the compute platform.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4, 7-14, and 17-22 are rejected under 35 U.S.C. 103 as being unpatentable over Patton et al., “Neuromorphic Computing for Autonomous Racing” (hereinafter Patton), in view of Schuman et al., "Evolutionary vs imitation learning for neuromorphic control at the edge" (hereinafter Schuman), further in view of Shukla et al., US 20160070414 A1 (hereinafter Shukla).
Regarding Claim 1, Patton teaches:
An integrated sensor system comprising: a sensor configured to collect sensor data; (Patton [p.3 left ¶3] The simulator provides a variety of state information about the vehicle on the track. We use the LIDAR sensor as the observation, and we utilize information about distance traveled, collisions, and laps completed as part of our fitness evaluation.)
a … neuromorphic compute platform comprising processing circuitry collocated with memory circuitry and communication channels interconnecting the processing circuitry and the memory circuitry… (Patton [abstract] We present a workflow with neuromorphic hardware, software, and training that can be used to develop a spiking neural network for neuromorphic hardware deployment to perform autonomous racing. We present initial results on utilizing this approach for this small-scale, real-world autonomous vehicle task.
[p.2 right ¶2] In this work, we focus on training and testing in simulation; with that, we utilize µCaspian’s hardware accurate software simulator, written in C++.) (Note: the neuromorphic hardware, software, and training corresponds to the neuromorphic compute platform; using neuromorphic software requires hardware comprising memory circuitry)
…wherein the neurosynaptic core comprises: a plurality of … compute resources that are each configured to implement a respective neuron of a plurality of neurons of a neural network trained to process the sensor data; and a plurality of … memory circuits that are each configured to implement a respective synapse of a plurality of synapses of the neural network with a respective on-chip communication channel (Patton [p.3 left ¶3] We down-sample to 10 equally spaced LIDAR beams in this field of view, and those values are given as input to the SNN… The SNN can select between three speed values (2, 5, 10) and 13 possible angle values (0, -0.01, 0.01, -0.03, 0.03, -0.05, 0.05, -0.07, 0.07, -0.1, 0.1, -0.3, 0.3). Whichever neuron spikes the most in the set of speed neurons (steering angle neurons) is selected as the output speed (steering angle).)
[abstract] We present a workflow with neuromorphic hardware, software, and training that can be used to develop a spiking neural network for neuromorphic hardware deployment to perform autonomous racing. We present initial results on utilizing this approach for this small-scale, real-world autonomous vehicle task.
[p.2 right ¶2] In this work, we focus on training and testing in simulation; with that, we utilize µCaspian’s hardware accurate software simulator, written in C++.) (Note: performing the training and testing requires executing software on hardware, corresponding to interconnecting communication between the processing circuitry and the memory circuitry; the output speed is selected within the SNN, which is on-chip of neuromorphic hardware, and thus corresponds to the implementation of the synapse with a respective on-chip communication channel)
Patton does not teach, but Schuman further teaches:
a … neuromorphic compute platform …; a plurality of … compute resources …; and a plurality of … memory circuits are hardware-based (Schuman [p.4 §3.1] The application and evaluation platform used in this work comes from the F1TENTH community. The F1TENTH community provides a suite of resources for a 1/10th scale Formula One competition, including specifications for the physical car, instructions for assembling and running the hardware and software, software for interacting with the car, and simulation software. In this work, we use both the F1TENTH Open AI gym environment for training and testing, as well as the physical F1TENTH car for real-world evaluation. The physical car and its components are shown in figure 2.
[p.5 fig.2]
[p.6 2nd ¶] The μCaspian architecture is implemented on a very small low cost FPGA…) (Note: the method is performed via the μCaspian, which is an FPGA in a physical evaluation platform, thus corresponding to hardware-based neuromorphic compute platform, plurality of compute resources, and plurality of memory circuits)
Schuman and Patton are analogous art to the present invention because both are from the same field of endeavor of neuromorphic hardware implementations for autonomous vehicles. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the physical evaluation platform from Schuman in Patton's neuromorphic computing method for autonomous racing. The motivation would have been to "[e]valuate these algorithmic approaches and the best performing spiking neural networks that result from these approaches both in simulation and on the physical car on a physical racetrack" (Schuman, p.9 last ¶).
Patton/Schuman does not teach, but Shukla further teaches:
wherein the hardware-based neuromorphic compute platform is integrated together with the sensor on a chip and configured as a neurosynaptic core
(Shukla [0033] In general, subassembly 64 may be any suitable assembly for incorporation into an electronic device. Subassemblies may be formed by mounting one or more electrical components (e.g., one component, two components, three components, four components, or five or more components) to one or more printed circuits and/or to other supporting structures to form a unitary subassembly structure. Examples of electrical components that may be included in a subassembly and which may be calibrated include input-output components such as … cameras, sensors, … light sensors such as ambient light sensors and other light sensors, motion sensors (accelerometers), capacitance sensors, capacitive proximity sensors, light-based proximity sensors, other proximity sensors, touch sensors, …storage components such as hard disk drive storage and other memory, integrated circuits such as one or more microprocessors, microcontrollers, digital signal processors, baseband processors …, and other electrical components or combinations of any two or more of these components. These electrical components may be mounted in any combination (one or more of these components, two or more of these components, three or more of these components, etc.) to form any suitable type of subassembly 64 for device 10.) (Note: the unitary subassembly structure corresponds to the chip integrating the sensor and the compute platform and thus configured as the neurosynaptic core)
Shukla and Patton/Schuman are analogous art to the present invention because both are from the same field of endeavor of circuitry assemblies for embedded systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the subassembly method from Shukla in Patton/Schuman's neuromorphic computing method for autonomous racing. The motivation would have been that "Calibrated subassemblies may be assembled together to form finished devices 10. Some calibrated subassemblies may be used as backups and may be retained for use in repairing devices that are accidentally damaged during use" (Shukla [0036]).
Regarding Claim 2, the rejection of Claim 1 over Patton/Schuman/Shukla is incorporated herein. Patton, via Patton/Schuman/Shukla, further teaches:
The integrated sensor system of claim 1, wherein the processing circuitry comprises at least one of a graphics processing unit (GPU), a central processing unit (CPU), a digital signal processor (DSP), an image signal processor (ISP), a field-programmable gate array (FPGA), a neural network processor (NNP), or a compute circuit. (Patton [p.2 right ¶1] The µCaspian development board includes a Lattice iCE40 UP5K1 FPGA on which the neuromorphic architecture is implemented.)
Regarding Claim 3, the rejection of Claim 1 over Patton/Schuman/Shukla is incorporated herein. Patton, via Patton/Schuman/Shukla, further teaches:
The integrated sensor system of claim 1, wherein the memory circuitry comprises at least one of a volatile memory, a memristor, or a crossbar array implemented by at least one of a volatile memory device or a memory circuit. (Patton [abstract] We present a workflow with neuromorphic hardware, software, and training that can be used to develop a spiking neural network for neuromorphic hardware deployment to perform autonomous racing. We present initial results on utilizing this approach for this small-scale, real-world autonomous vehicle task.
[p.2 right ¶2] In this work, we focus on training and testing in simulation; with that, we utilize µCaspian’s hardware accurate software simulator, written in C++.) (Note: using neuromorphic software requires hardware comprising a volatile memory and a memory circuit)
Regarding Claim 4, the rejection of Claim 1 over Patton/Schuman/Shukla is incorporated herein. Patton, via Patton/Schuman/Shukla, further teaches:
The integrated sensor system of claim 1, wherein the sensor comprises at least one of a light detection and ranging (LIDAR) sensor, a camera sensor, a radio detection and ranging (RADAR) sensor, or an ultrasonic sensor. (Patton [p.3 left ¶3] The simulator provides a variety of state information about the vehicle on the track. We use the LIDAR sensor as the observation, and we utilize information about distance traveled, collisions, and laps completed as part of our fitness evaluation.)
Regarding Claim 7, the rejection of Claim 1 over Patton/Schuman/Shukla is incorporated herein. Patton, via Patton/Schuman/Shukla, further teaches:
The integrated sensor system of claim 1, wherein the neural network comprises an artificial neural network, and wherein the artificial neural network is configured to perform perception sensing using the sensor data. (Patton [p.1 right ¶2] Neuromorphic computers are brain-inspired in their hardware implementation. They natively perform neural network-style computation and can do so typically with very low power.
[p.3 left ¶3] The simulator provides a variety of state information about the vehicle on the track. We use the LIDAR sensor as the observation, and we utilize information about distance traveled, collisions, and laps completed as part of our fitness evaluation. The LIDAR sensor provides 1080 individual beam values, sampled from a 270 degree forward-facing view of the car. We down-sample to 10 equally spaced LIDAR beams in this field of view, and those values are given as input to the SNN.) (Note: the SNN is a type of artificial neural network)
Regarding Claim 8, the rejection of Claim 7 over Patton/Schuman/Shukla is incorporated herein. Patton, via Patton/Schuman/Shukla, further teaches:
The integrated sensor system of claim 7, wherein the perception sensing comprises at least one of object detection, object tracking, depth or distance detection, localization, scene mapping, path planning, decision making, or one or more autonomous vehicle operations. (Patton [p.2 right last ¶] For training and designing an SNN for this task, we use Evolutionary Optimization for Neuromorphic Systems (EONS)… Because EONS relies entirely on a fitness evaluation score to drive the optimization, it is extremely flexible to apply to different types of applications, including classification, control, and anomaly detection.) (Note: classification and anomaly detection correspond to object detection; control corresponds to path planning)
Regarding Claim 9, the rejection of Claim 7 over Patton/Schuman/Shukla is incorporated herein. Patton, via Patton/Schuman/Shukla, further teaches:
The integrated sensor system of claim 7, wherein the artificial neural network comprises a spiking neural network. (Patton [p.1 right ¶2] Neuromorphic computers are brain-inspired in their hardware implementation. They natively perform neural network-style computation and can do so typically with very low power.
[p.3 left ¶3] The simulator provides a variety of state information about the vehicle on the track. We use the LIDAR sensor as the observation, and we utilize information about distance traveled, collisions, and laps completed as part of our fitness evaluation. The LIDAR sensor provides 1080 individual beam values, sampled from a 270 degree forward-facing view of the car. We down-sample to 10 equally spaced LIDAR beams in this field of view, and those values are given as input to the SNN.) (Note: the SNN is a type of artificial neural network)
Regarding Claim 10, the rejection of Claim 1 over Patton/Schuman/Shukla is incorporated herein. Patton, via Patton/Schuman/Shukla, further teaches:
The integrated sensor system of claim 1, wherein the integrated sensor system comprises a sensor system on an autonomous vehicle. (Patton [Abstract] We present initial results on utilizing this approach for this small-scale, real-world autonomous vehicle task.
[p.3 Fig.1])
Regarding Claim 21, the rejection of Claim 1 over Patton/Schuman/Shukla is incorporated herein. Schuman, via Patton/Schuman/Shukla, further teaches:
The integrated sensor system of claim 1, wherein a result of processing the sensor data with the hardware-based neuromorphic compute platform is provided to a local computing device of an autonomous vehicle for use in navigating the autonomous vehicle, wherein the hardware-based neuromorphic compute platform is implemented on the autonomous vehicle. (Schuman [p.4 §3.1] The application and evaluation platform used in this work comes from the F1TENTH community. The F1TENTH community provides a suite of resources for a 1/10th scale Formula One competition, including specifications for the physical car, instructions for assembling and running the hardware and software, software for interacting with the car, and simulation software. In this work, we use both the F1TENTH Open AI gym environment for training and testing, as well as the physical F1TENTH car for real-world evaluation. The physical car and its components are shown in figure 2.
[p.5 fig.2]
[p.6 2nd ¶] The μCaspian architecture is implemented on a very small low cost FPGA…) (Note: interacting with the car for real-world evaluation corresponds to providing communication for navigating the autonomous vehicle; μCaspian corresponds to the compute platform)
Regarding Claim 11, Patton teaches:
A method comprising: sending, from a sensor configured to collect sensor data to a hardware-based neuromorphic compute platform, the sensor data collected by the sensor, wherein the hardware-based neuromorphic compute platform comprises processing circuitry collocated with memory circuitry and communication channels interconnecting the processing circuitry and the memory circuitry; (Patton [p.3 left ¶3] The simulator provides a variety of state information about the vehicle on the track. We use the LIDAR sensor as the observation, and we utilize information about distance traveled, collisions, and laps completed as part of our fitness evaluation.
[abstract] We present a workflow with neuromorphic hardware, software, and training that can be used to develop a spiking neural network for neuromorphic hardware deployment to perform autonomous racing. We present initial results on utilizing this approach for this small-scale, real-world autonomous vehicle task.
[p.2 right ¶2] In this work, we focus on training and testing in simulation; with that, we utilize µCaspian’s hardware accurate software simulator, written in C++.) (Note: the neuromorphic hardware, software, and training corresponds to the neuromorphic compute platform; using neuromorphic software requires hardware comprising memory circuitry; performing the training and testing requires executing software on hardware, corresponding to interconnecting communication between the processing circuitry and the memory circuitry)
and processing, using a neural network implemented by the hardware-based neuromorphic compute platform, the sensor data to generate an output (Patton [p.3 left 3rd ¶] The SNN can select between three speed values (2, 5, 10) and 13 possible angle values (0, -0.01, 0.01, -0.03, 0.03, -0.05, 0.05, -0.07, 0.07, -0.1, 0.1, -0.3, 0.3). Whichever neuron spikes the most in the set of speed neurons (steering angle neurons) is selected as the output speed (steering angle).)
wherein the neurosynaptic core comprises: a plurality of hardware-based compute resources that are each configured to implement a respective neuron of a plurality of neurons of a neural network trained to process the sensor data; and a plurality of hardware-based memory circuits that are each configured to implement a respective synapse of a plurality of synapses of the neural network with a respective on-chip communication channel (Patton [p.3 left ¶3] We down-sample to 10 equally spaced LIDAR beams in this field of view, and those values are given as input to the SNN… The SNN can select between three speed values (2, 5, 10) and 13 possible angle values (0, -0.01, 0.01, -0.03, 0.03, -0.05, 0.05, -0.07, 0.07, -0.1, 0.1, -0.3, 0.3). Whichever neuron spikes the most in the set of speed neurons (steering angle neurons) is selected as the output speed (steering angle).)
[abstract] We present a workflow with neuromorphic hardware, software, and training that can be used to develop a spiking neural network for neuromorphic hardware deployment to perform autonomous racing. We present initial results on utilizing this approach for this small-scale, real-world autonomous vehicle task.
[p.2 right ¶2] In this work, we focus on training and testing in simulation; with that, we utilize µCaspian’s hardware accurate software simulator, written in C++.) (Note: each speed value and angle values correspond to each synapse; performing the training and testing requires executing software on hardware, corresponding to interconnecting communication between the processing circuitry and the memory circuitry; the output speed is selected within the SNN, which is on-chip of neuromorphic hardware, and thus corresponds to the implementation of the synapse with a respective on-chip communication channel)
Patton does not teach, but Shukla further teaches:
wherein the hardware-based neuromorphic compute platform is integrated together with the sensor on a chip and configured as a neurosynaptic core
(Shukla [0033] In general, subassembly 64 may be any suitable assembly for incorporation into an electronic device. Subassemblies may be formed by mounting one or more electrical components (e.g., one component, two components, three components, four components, or five or more components) to one or more printed circuits and/or to other supporting structures to form a unitary subassembly structure. Examples of electrical components that may be included in a subassembly and which may be calibrated include input-output components such as … cameras, sensors, … light sensors such as ambient light sensors and other light sensors, motion sensors (accelerometers), capacitance sensors, capacitive proximity sensors, light-based proximity sensors, other proximity sensors, touch sensors, …storage components such as hard disk drive storage and other memory, integrated circuits such as one or more microprocessors, microcontrollers, digital signal processors, baseband processors …, and other electrical components or combinations of any two or more of these components. These electrical components may be mounted in any combination (one or more of these components, two or more of these components, three or more of these components, etc.) to form any suitable type of subassembly 64 for device 10.) (Note: the unitary subassembly structure corresponds to the chip integrating the sensor and the compute platform and thus configured as the neurosynaptic core)
Shukla and Patton are analogous art to the present invention because both are from the same field of endeavor of circuitry assemblies for embedded systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the subassembly method from Shukla in Patton's neuromorphic computing method for autonomous racing. The motivation would have been that "Calibrated subassemblies may be assembled together to form finished devices 10. Some calibrated subassemblies may be used as backups and may be retained for use in repairing devices that are accidentally damaged during use" (Shukla [0036]).
Claims 12-14, 17-19, and 22, which depend from Claim 11, recite limitations substantially identical to those of Claims 2-4, 7-9, and 21, respectively. Thus, Claims 12-14, 17-19, and 22 are rejected for the reasons set forth for Claims 2-4, 7-9, and 21, respectively.
Independent Claim 20 recites "An autonomous vehicle comprising: a local computing device; and a sensor system communicatively coupled to the local computing device, the sensor system comprising" (Patton [p.3 left ¶3] The simulator provides a variety of state information about the vehicle on the track. We use the LIDAR sensor as the observation, and we utilize information about distance traveled, collisions, and laps completed as part of our fitness evaluation.
[abstract] We present a workflow with neuromorphic hardware, software, and training that can be used to develop a spiking neural network for neuromorphic hardware deployment to perform autonomous racing. We present initial results on utilizing this approach for this small-scale, real-world autonomous vehicle task.
[p.2 right ¶2] In this work, we focus on training and testing in simulation; with that, we utilize µCaspian’s hardware accurate software simulator, written in C++.) (Note: the neuromorphic hardware, software, and training corresponds to the local computing device), the sensor system being configured to perform the limitations of Claim 1. Thus, Claim 20 is rejected for the reasons set forth for Claim 1.
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Patton/Schuman/Shukla in view of Mitchell et al., "A Small, Low Cost Event-Driven Architecture for Spiking Neural Networks on FPGAs" (hereinafter Mitchell).
Regarding Claim 6, the rejection of Claim 1 over Patton/Schuman/Shukla is incorporated herein. Patton/Schuman/Shukla does not explicitly teach, but Mitchell further teaches:
The integrated sensor system of claim 1, wherein the neuromorphic compute platform is configured to: receive a spike event generated by a first neuron of the plurality of neurons; store the spike event in a synapse of the plurality of synapses; send the spike event to a second neuron of the plurality of neurons; and perform, by the second neuron, a spike computation. (Mitchell [p.2 right 4th ¶] The neuron provides long term storage of charge. Every time step, charge from one of the dendrite buffers is flushed. Each neuron’s charge value is updated as necessary, and if the charge exceeds the configured threshold, the neuron will emit a spike to the axon and reset the charge value back to zero…
The axon serves to map spikes from neurons to the appropriate range of synapses. All output synapses for a given neuron are allocated to a contiguous range of synapse addresses.
[p.3 left ¶3] As such, the SNN will receive 10 LIDAR beams as input…
[p.3 right last ¶] Every neuron in the network is connected to each other neuron with a synapse of random weight.) (Note: updating the charge value and determining the threshold satisfaction method corresponds to performing a spike computation; sending the spike dispatch to the other neuron corresponds to sending the spike event to a second neuron of a plurality of neurons)
Mitchell and Patton are analogous art to the present invention because both are from the same field of endeavor of spiking neural network systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the spiking neuron network storage, updating, and spike emission method from Mitchell in Patton's neuromorphic computing method for autonomous racing. The motivation would have been that "[t]his particular platform was developed with several different use cases in mind. First, because of its size and energy usage, it is well-suited for edge deployment applications. Second, because it is inexpensive and has an associated user-friendly software development system in Python for programming the system, it is also amenable for educational purposes" (Mitchell [p.1 right ¶2]).
Claim 16, which depends from Claim 11, recites a system configured to perform the method of Claim 6. Thus, Claim 16 is rejected for the reasons set forth for Claim 6.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEP HAN whose telephone number is (703)756-1346. The examiner can normally be reached Mon-Fri 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached on (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.H./Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122