Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
Acknowledgment is made of the Information Disclosure Statement dated 09/05/2025. All of the cited references have been considered.
Response to Arguments
Applicant’s arguments on pages 12-16 of the Remarks dated 11/17/2025, regarding the rejection of claims 1-20 under 35 U.S.C. 103 with respect to the limitation “in a physical setting comprising environmental detriments to utilizing sensors comprising Internet of Things devices, wherein the group of sensors of the multiple modalities are integrated into a roaming edge device,” have been fully considered but are moot. New reference Fayyad has been incorporated below to teach the newly presented limitations.
Beginning on page 13, Applicant asserts that equating the inference with both a task and a classification is inaccurate and that both cannot logically read on the “inference.” However, the Examiner interprets the inference as the result produced by performing the task or classification.
CRM
Examiner interprets the claimed computer readable medium as non-transitory in view of paragraph [0098] of the Specification.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 3, 4, 6, 10, 11, 12, 13, 14, 16, 17, 18, 19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Yadav et al. (US20210201091A1), hereinafter Yadav, in view of Yamato et al. (US20200412807A1), hereinafter Yamato, and in further view of Fayyad et al. (“Deep Learning Sensor Fusion for Autonomous Vehicle Perception and Localization: A Review”), hereinafter Fayyad.
Claim 1 is rejected over Yadav, Yamato and Fayyad.
Regarding claim 1, Yadav teaches a computer-implemented method, comprising:
engaging, by one or more processors, based on a request for an inference, from a group of sensors of multiple modalities at a physical location, at least one sensor of a main modality to provide data to a pipeline to generate the inference, (“As illustrated in FIG. 2, sensor data 18 a, 18 b, 18 c generated by each sensor 14 a, 14 b, 14 c of the plurality of sensors 14, including first sensor data 18 a from the first sensor 14 a, second sensor data 18 b from the second sensor 14 b, and third sensor data 18 c from the third sensor 14 c, is received by the data quality module 26. The data quality module 26 is configured to assess the quality of the sensor data 18 a, 18 b, 18 c generated from each sensor 14 a, 14 b, 14 c.”; [0044]; and “The plurality of sensors 14 may be selected from a passive infrared (“PIR”) sensor, a thermopile sensor, a microwave sensor, an image sensor, a sound sensor, and/or any other sensor modality.”; [0038]; and “the method further comprises: receiving a classification output from each selected machine learning algorithm for each selected sensor, wherein the classification output is a result generated by performing the task;”; [0009]; Note: The classification is the inference and the first sensor used is the main modality.)
wherein the pipeline comprises one or more machine learning models, and (See Figure 2 of Yadav to see that there is a pipeline of an Artificial Neural Network (ANN) for each Sensor N (SN).)
wherein the one or more machine learning models generate the inference for a downstream task; (“the method further comprises: receiving a classification output from each selected machine learning algorithm for each selected sensor, wherein the classification output is a result generated by performing the task”; [0009]; Note: The classification is the inference.)
based on the engaging of the at least one sensor of the main modality, obtaining, by the one or more processors, raw data from the at least one sensor of the main modality; (“A data quality module 26 in the controller 12 is arranged to determine the quality of data 18 a, 18 b, 18 c (shown in FIG. 2) received from the sensors 14 a, 14 b, 14 c.”; [0040])
based on the automatically engaging of the at least one sensor of the at least one different modality, obtaining, by the one or more processors, new raw data from the at least one sensor of the at least one different modality; and (“Multiple sensors (for example, image sensors, sound sensors, infrared sensors, etc.) may be arranged in the room to detect what is happening in the room. The context-switching algorithm can be utilized to determine which of the multiple sensors to use given the quality of the data that each sensor is generating.”; [0030]; Note: See Figure 2 of Yadav to see that each Sensor N (SN) processes its own Sensor Data (D-N), therefore each Sensor Data (D-N) after D1 is new raw data.)
applying, by the one or more processors, the one or more machine learning models to the new raw data to derive the inference. (“For every sensor (e.g., 14 a, 14 b, 14 c) of the plurality of sensors 14 which the assessment artificial intelligence program 62 determines to use, a machine learning algorithm (from the predefined set of machine learning algorithms 44 (shown in FIG. 3)) is used to provide a classification output (e.g., 50 a, 50 b). A classification output 50 a, 50 b is an output from the machine learning algorithm (e.g., 44 a, 44 b, 44 c, 44 d) for a particular sensor (e.g., 14 a, 14 b, 14 c) and is a result generated by performing the task 16.”; [0049]; Note: A classification output is the derived inference.)
determining, by the one or more processors, based on the one or more machine learning models and the main modality, one or more modalities which provide data to generate the inference for a downstream task in addition to the main modality data, wherein the at least one sensor of the at least one different modality than the main modality comprises the one or more modalities. (“The ability of a particular machine learning algorithm to use sensor data 18 a, 18 b, 18 c to accurately accomplish a task depends on many factors, including the type of sensor data 18 a, 18 b, 18 c, the quality of sensor data 46 a, 46 b, 46 c, and the specific task. A non-exhaustive list of exemplary machine learning algorithms which can be used to accomplish a task includes support vector machines, nearest neighbors, decision trees, Bayesian algorithms, neural networks, deep learning based algorithms, or any other known machine learning algorithms or combinations thereof. Machine learning algorithms selected to be included set of predetermine machine learning algorithms 44 may be selected based on the known performance of those algorithms in accomplishing the task. As another example, the machine learning algorithms included in the set of predetermine machine learning algorithms 44 may be selected by evaluating their performance accomplishing a given task using a benchmark data set meeting certain quality thresholds.”; [0047]; and “The plurality of sensors 14 may be selected from a passive infrared (“PIR”) sensor, a thermopile sensor, a microwave sensor, an image sensor, a sound sensor, and/or any other sensor modality.”; [0038])
based on determining the one or more modalities which provide data to generate the inference for a downstream task in addition to the main modality data, (“The plurality of sensors 14 may be selected from a passive infrared (“PIR”) sensor, a thermopile sensor, a microwave sensor, an image sensor, a sound sensor, and/or any other sensor modality.”; [0038])
Yadav does not teach in a physical setting comprising environmental detriments to utilizing sensors comprising Internet of Things devices, wherein the group of sensors of the multiple modalities are integrated into a roaming edge device,
However, Fayyad teaches in a physical setting comprising environmental detriments to utilizing sensors comprising Internet of Things devices, wherein the group of sensors of the multiple modalities are integrated into a roaming edge device, (“thermal cameras have been fused with either RGB-D [24] or LiDAR sensors [25,26] to add depth, and hence improve the system performance; however, this advantage can be dramatically compromised in extreme weather conditions, such as high temperatures”; page 4; and “One of the remaining challenges of self-driving cars is their compromised maneuverability and performance in bad weather conditions, such as rain, snow, dust storms, or fog, which can compromise vision and range measurements (degradation of the visibility distance). In such conditions, the performance of most current active and passive sensors is significantly compromised, which in turn leads to erroneous and even misleading outputs. The consequence of a partial or complete sensor failure can be catastrophic for autonomous vehicles and their surroundings. A possible measure to alleviate this problem is to evaluate the risk of failure early in the process based on learned experiences and historical data using deep learning algorithms and to allow the driver to interrupt or completely disengage the autonomous system.”; [Section 5.1. Harsh Weather Conditions]; Note: An autonomous vehicle uses Internet of Things devices to operate and navigate, and an autonomous vehicle is an example of a roaming edge device.)
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the infrared sensor, image sensor, and sound sensor of Yadav with the combinations of different sensors of Fayyad to effectively compensate for losses due to outages in certain environmental conditions (Fayyad, page 5). Yadav and Fayyad are analogous art because they both concern multimodal sensors.
Yadav does not teach applying, by the one or more processors, an outlier detector to the raw data to determine if there is an outlier in the raw data;
based on determining that there is an outlier in the raw data, automatically engaging, by the one or more processors, the at least one sensor of the at least one different modality than the main modality from the group of sensors of multiple modalities;
However, Yamato teaches applying, by the one or more processors, an outlier detector to the raw data to determine if there is an outlier in the raw data; (“This session control apparatus switches the first device that outputs input data to the processing module to the second device when the input data fails to satisfy the condition (outlier). Thus, the session control apparatus discontinues input of data failing to satisfy the condition into the processing module and can maintain the quality of input data.”; [0007]; and “The quality conditions relate to the quality of input data (sensing data). Examples of the conditions include an outlier condition, a data dropout frequency condition, and a manufacturer condition. Besides these, the quality conditions may also include a sensor state condition, a sensor installation condition, a sensor maintenance history condition, a data specification condition, and a data resolution condition.”; [0064])
based on determining that there is an outlier in the raw data,
automatically engaging, by the one or more processors, the at least one sensor of the at least one different modality than the main modality from the group of sensors of multiple modalities; (“Each real sensor 12 observes a target to obtain sensing data. Each real sensor 12 may be an image sensor (camera), a temperature sensor, a humidity sensor, an illumination sensor, a force sensor, a sound sensor, a radio frequency identification (RFID) sensor, an infrared sensor, a posture sensor, a rain sensor, a radiation sensor, or a gas sensor. Each real sensor 12 may be any other sensor.”; [0045]; and “The session control apparatus 130 according to the present embodiment switches the input sensor to another real sensor 12 when the quality of input data output to the processing module 150 fails to satisfy conditions regarding the quality of input data output to the processing module 150. Thus, the session control apparatus 130 discontinues input of input data failing to satisfy the conditions into the processing module 150 and can maintain the quality of input data. “; [0040]; Note: These real sensors are of different modalities and conditions include an outlier condition.)
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the adaptive multimodal sensors of Yadav with the sensor switching unit of Yamato to effectively use sensors based on data quality (Yamato, [0007]). Yadav and Yamato are analogous art because they both concern sensor selection based on quality of sensor data.
Claim 2 is rejected over Yadav, Yamato and Fayyad with the incorporation of claim 1.
Regarding claim 2, Yadav does not teach based on determining that there is no outlier in the raw data, applying, by the one or more processors, the one or more machine learning models to the raw data to derive the inference.
However, Yamato teaches based on determining that there is no outlier in the raw data, applying, by the one or more processors, the one or more machine learning models to the raw data to derive the inference. (“The switching determination unit 131 determines whether the input sensor for the processing module 150 is to be switched based on the input quality check result of the input quality check module 120. For example, the switching determination unit 131 determines that the input sensor is not to be switched when the input quality check result is affirmative, and determines that the input sensor is to be switched when the input quality check result is negative.”; [0089])
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the adaptive multimodal sensors of Yadav with the sensor switching unit of Yamato to effectively use sensors based on data quality (Yamato, [0007]). Yadav and Yamato are analogous art because they both concern sensor selection based on quality of sensor data.
Claim 3 is rejected over Yadav, Yamato and Fayyad with the incorporation of claim 1.
Regarding claim 3, Yadav teaches determining, by the one or more processors, the main modality of multiple modalities for sensor data provided to the pipeline to generate the inference; (“As illustrated in FIG. 2, sensor data 18 a, 18 b, 18 c generated by each sensor 14 a, 14 b, 14 c of the plurality of sensors 14, including first sensor data 18 a from the first sensor 14 a, second sensor data 18 b from the second sensor 14 b, and third sensor data 18 c from the third sensor 14 c, is received by the data quality module 26. The data quality module 26 is configured to assess the quality of the sensor data 18 a, 18 b, 18 c generated from each sensor 14 a, 14 b, 14 c.”; [0044]; and “The plurality of sensors 14 may be selected from a passive infrared (“PIR”) sensor, a thermopile sensor, a microwave sensor, an image sensor, a sound sensor, and/or any other sensor modality.”; [0038]; and “the method further comprises: receiving a classification output from each selected machine learning algorithm for each selected sensor, wherein the classification output is a result generated by performing the task;”; [0009]; Note: The classification is the inference and the first sensor used is the main modality.)
obtaining, by the one or more processors, data from the group of sensors of the multiple modalities; (“A data quality module 26 in the controller 12 is arranged to determine the quality of data 18 a, 18 b, 18 c (shown in FIG. 2) received from the sensors 14 a, 14 b, 14 c.”; [0040])
utilizing, by the one or more processors, the data from the group of sensors to train the one or more machine learning models, based on the physical location; and (“the sensor data 18 a, 18 b, 18 c and the sensor data quality metric 46 a, 46 b, 46 c for each sensor 14 a, 14 b, 14 c are inputted into an assessment artificial intelligence program 62 (e.g., artificial neural network 40 a, 40 b, 40 c) operated by processor 22. The assessment artificial intelligence program 62 is a trained, supervised machine learning algorithm whose objective it is to learn if sensor data 18 a, 18 b, 18 c having a determined quality metric 46 a, 46 b, 46 c can lead to an accurate and/or satisfactory execution of a task.”; [0044])
Yadav does not teach generating, by the one or more processors, an outlier detector [for each of the one or more machine learning models,] based on the data from the group of sensors.
However, Yamato teaches generating, by the one or more processors, an outlier detector [for each of the one or more machine learning models,] based on the data from the group of sensors. (“When multiple pieces of real sensor information are received, the prioritizing unit 113 prioritizes the real sensors 12. The prioritizing unit 113 may prioritize each of the real sensors 12 with any criterion. When, for example, the outlier condition is prioritized among the conditions contained in the user data catalogue, the prioritizing unit 113 may assign a higher priority to the real sensor 12 having a lower outlier frequency.”; [0077])
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the adaptive multimodal sensors of Yadav with the sensor switching unit based on the outlier condition in sensor data of Yamato to effectively use sensors based on data quality (Yamato, [0007]). Yadav and Yamato are analogous art because they both concern sensor selection based on quality of sensor data.
Claim 4 is rejected over Yadav, Yamato and Fayyad with the incorporation of claim 1.
Regarding claim 4, Yadav teaches generating, by one or more processors, the pipeline. (“Referring to FIG. 2, a data quality module 26 is configured to receive sensor data 18 a, 18 b, 18 c generated by a plurality of sensors 14 in the environment 2 (shown in FIG. 1). Processor 22 (shown in FIG. 1) is arranged to operate an assessment artificial intelligence program 62, such as an artificial neural network 40, which can make determinations about which sensors 14 a, 14 b, 14 c to use, based on the quality of the sensor data 18 a, 18 b, 18 c, to perform a task 16 (shown in FIG. 3). Additionally, the assessment artificial intelligence program 62 can determine which machine learning algorithm from a predetermined set of machine learning algorithms 44 (shown in FIG. 3) should be used to perform the task.”; [0043]; Note: See Figure 2 of Yadav to see that each sensor has its own artificial intelligence pipeline.)
Claim 6 is rejected over Yadav, Yamato and Fayyad with the incorporation of claim 1.
Regarding claim 6, Yadav teaches wherein the pipeline is an artificial intelligence pipeline and the task is an artificial intelligence task. (“Referring to FIG. 2, a data quality module 26 is configured to receive sensor data 18 a, 18 b, 18 c generated by a plurality of sensors 14 in the environment 2 (shown in FIG. 1). Processor 22 (shown in FIG. 1) is arranged to operate an assessment artificial intelligence program 62, such as an artificial neural network 40, which can make determinations about which sensors 14 a, 14 b, 14 c to use, based on the quality of the sensor data 18 a, 18 b, 18 c, to perform a task 16 (shown in FIG. 3). Additionally, the assessment artificial intelligence program 62 can determine which machine learning algorithm from a predetermined set of machine learning algorithms 44 (shown in FIG. 3) should be used to perform the task.”; [0043]; Note: See Figure 2 of Yadav to see that each sensor has its own artificial intelligence pipeline.)
Claim 10 is rejected over Yadav, Yamato and Fayyad with the incorporation of claim 1.
Regarding claim 10, Yadav teaches wherein the at least one sensor of at least one different modality than the main modality comprises all available sensors at the location. (“The plurality of sensors 14 may comprise sensors 14 a, 14 b, 14 c of one type or multiple types and are arranged to provide data to perform a set of preselected tasks. The plurality of sensors 14 may be selected from a passive infrared (“PIR”) sensor, a thermopile sensor, a microwave sensor, an image sensor, a sound sensor, and/or any other sensor modality.”; [0038])
Claim 11 is rejected over Yadav, Yamato and Fayyad.
Regarding claim 11, Yadav teaches a computer program product comprising:
a computer readable storage medium readable by one or more processors of a shared computing environment comprising a computing system and storing instructions for execution by the one or more processors for performing a method comprising: (“The present disclosure may be implemented as a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure”; [0053])
The remainder of claim 11 is claim 1 in the form of a computer program product and is rejected for the same reasons as claim 1 stated above.
Dependent claim 12 is claim 2 in the form of a computer program product and is rejected for the same reasons as claim 2 stated above. For the rejection of the limitations specifically pertaining to the computer program product of claim 11, see the rejection of claim 11 above.
Dependent claim 13 is claim 3 in the form of a computer program product and is rejected for the same reasons as claim 3 stated above. For the rejection of the limitations specifically pertaining to the computer program product of claim 11, see the rejection of claim 11 above.
Dependent claim 14 is claim 4 in the form of a computer program product and is rejected for the same reasons as claim 4 stated above. For the rejection of the limitations specifically pertaining to the computer program product of claim 11, see the rejection of claim 11 above.
Dependent claim 16 is claim 6 in the form of a computer program product and is rejected for the same reasons as claim 6 stated above. For the rejection of the limitations specifically pertaining to the computer program product of claim 11, see the rejection of claim 11 above.
Claim 17 is rejected over Yadav, Yamato and Fayyad.
Regarding claim 17, Yadav teaches a computer system comprising:
a group of sensors of multiple modalities communicatively coupled to one or more processors; (“In an aspect, the plurality of sensors include at least one of a PIR sensor, a thermopile sensor, a microwave sensor, an image sensor, and a sound sensor.”; [0011])
a memory; (“The plurality of non-transitory computer readable instructions arranged to be stored and executed on a memory and a processor.”; [0013])
the one or more processors in communication with the memory;
program instructions executable by the one or more processors to perform a method, the method comprising: (“The plurality of non-transitory computer readable instructions arranged to be stored and executed on a memory and a processor.”; [0013])
The remainder of claim 17 is claim 1 in the form of a computer system and is rejected for the same reasons as claim 1 stated above.
Dependent claim 18 is claim 2 in the form of a computer system and is rejected for the same reasons as claim 2 stated above. For the rejection of the limitations specifically pertaining to the computer system of claim 17, see the rejection of claim 17 above.
Dependent claim 19 is claim 3 in the form of a computer system and is rejected for the same reasons as claim 3 stated above. For the rejection of the limitations specifically pertaining to the computer system of claim 17, see the rejection of claim 17 above.
Claim 21 is rejected over Yadav, Yamato and Fayyad with the incorporation of claim 1.
Regarding claim 21, Yadav does not teach wherein the environmental detriments comprise elevated temperatures.
However, Fayyad teaches wherein the environmental detriments comprise elevated temperatures. (“thermal cameras have been fused with either RGB-D [24] or LiDAR sensors [25,26] to add depth, and hence improve the system performance; however, this advantage can be dramatically compromised in extreme weather conditions, such as high temperatures”; page 4)
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the infrared sensor, image sensor, and sound sensor of Yadav with the combinations of different sensors of Fayyad to effectively compensate for losses due to outages in certain environmental conditions (Fayyad, page 5). Yadav and Fayyad are analogous art because they both concern multimodal sensors.
Claims 5, 9, 15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yadav, Yamato and Fayyad in further view of Gonzalez Aguirre et al. (US20190135300A1, Methods and apparatus for unsupervised multimodal anomaly detection for autonomous vehicles), hereinafter Gonzalez Aguirre.
Claim 5 is rejected over Yadav, Yamato, Fayyad and Gonzalez Aguirre with the incorporation of claim 1.
Regarding claim 5, Yadav does not teach wherein the raw data comprises unlabeled data.
However, Gonzalez Aguirre teaches wherein the raw data comprises unlabeled data. (“FIGS. 10A and 10B depict an example end-to-end system training data flow 1000 of the anomaly detection apparatus 306 of FIGS. 3 and 4 to perform unsupervised multimodal anomaly detection for autonomous vehicles using the example feature fusion and deviation data flow 700 of FIG. 7. The example end-to-end system training data flow 1000 is shown as including six phases. At an example first phase (1) 1002 (FIG. 10A), the sensor data interface 402 (FIG. 4) obtains multimodal raw sensor data samples (Ii(x,y,t)) (unlabeled data) from a database to train auto-encoders for corresponding ones of the sensors 202, 204, 206, 304 (FIGS. 2 and 3).”; [0069])
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the multimodal sensors of Yadav with the unsupervised multimodal anomaly detection of Gonzalez Aguirre to effectively perform anomaly detection for autonomous systems (Gonzalez Aguirre, [0017]). Yadav and Gonzalez Aguirre are analogous art because they both concern processing data using multimodal sensors.
Claim 9 is rejected over Yadav, Yamato, Fayyad and Gonzalez Aguirre with the incorporation of claim 1.
Regarding claim 9, Yadav teaches wherein the main modality is selected from the group consisting of: optical, audio, infrared, and (“The plurality of sensors 14 may be selected from a passive infrared (“PIR”) sensor (infrared), a thermopile sensor, a microwave sensor, an image sensor (optical), a sound sensor (audio), and/or any other sensor modality.”; [0038])
Yadav does not teach light detecting and ranging.
However, Gonzalez Aguirre teaches light detecting and ranging. (“Autonomous robotic systems such as autonomous vehicles use multiple cameras as well as range sensors to perceive characteristics of their environments. The different sensor types (e.g., infrared (IR) sensors, red-green-blue (RGB) color cameras, Light Detection and Ranging (LIDAR) sensors, Radio Detection and Ranging (RADAR) sensors, SOund Navigation And Ranging (SONAR) sensors, etc.) can be used together in heterogeneous sensor configurations useful for performing various tasks of autonomous vehicles”; [0016])
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the infrared sensor, image sensor, and sound sensor of Yadav with the LIDAR sensor of Gonzalez Aguirre to effectively perform various autonomous vehicle tasks (Gonzalez Aguirre, [0016]). Yadav and Gonzalez Aguirre are analogous art because they both concern processing data using multimodal sensors.
Dependent claim 15 is claim 5 in the form of a computer program product and is rejected for the same reasons as claim 5 stated above. For the rejection of the limitations specifically pertaining to the computer program product of claim 11, see the rejection of claim 11 above.
Claim 20 is rejected over Yadav, Yamato, Fayyad and Gonzalez Aguirre with the incorporation of claim 17.
Regarding claim 20, Yadav does not teach wherein a roaming edge device comprises the group of sensors of the multiple modalities and the one or more processors.
However, Gonzalez Aguirre teaches wherein a roaming edge device comprises the group of sensors of the multiple modalities and the one or more processors. (“In the heterogeneous sensor configuration of FIG. 1, the autonomous vehicle 100 (roaming edge device) is provided with camera sensors, RADAR sensors, and LIDAR sensors”; [0019])
It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the infrared sensor, image sensor, and sound sensor of Yadav with the LIDAR sensor of Gonzalez Aguirre to effectively perform various autonomous vehicle tasks (Gonzalez Aguirre, [0016]). Yadav and Gonzalez Aguirre are analogous art because they both concern processing data using multimodal sensors.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID H TRAN whose telephone number is (703)756-1525. The examiner can normally be reached M-F 9:30 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID H TRAN/Examiner, Art Unit 2147
/VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147