DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 09/15/2025 have been fully considered and are persuasive in part, as addressed below.
Regarding applicant’s remarks directed to the rejection of claims under 35 USC § 101, the applicant argues that the amended claims are directed to a technical improvement. Examiner respectfully agrees and withdraws the prior rejection of claims under 35 USC § 101.
Regarding applicant’s remarks directed to the rejection of claims under 35 USC § 102, the arguments are directed to newly amended limitations that were not previously examined. Therefore, applicant's arguments are rendered moot. The examiner refers to the rejection under 35 USC § 102 in the current Office action for more details.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Liu et al. (U.S. Patent Application Publication No. US 2019/0103026 A1, “Liu”).
Regarding claim 1,
Liu teaches An information processing apparatus, comprising: circuitry configured to:
(Liu, [0066], “These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like.”)
Liu teaches collect first sensor information detected by a first sensor, i.e., object detector 320, and second sensor information detected by a second sensor, i.e., tracker 330;
[Image: media_image1.png (greyscale)]
(Liu, “[0040] The object detector 320 detects objects in cropped portions of image frames of input image data determined by the cropping engine 310. Additionally, the object detector 320 may perform object detection using image parameters received from the cropping engine 310 including image type or resolution, camera or video frame rate, scale used for cropping, etc [first sensor information detected by a first sensor]. The input image data may include image frames of a video clip having a predetermined duration (e.g., 30-60 seconds). The object detector 320 may use one or more types of object detection models or techniques known to one skilled in the art including, but not limited to, single shot detection (SSD), normalized cross-correlation (NCC), machine learning or deep learning architectures (e.g., neural networks, convolutions, Bayesian inference, linear classification, loss functions, optimizations, generative models, principle component analysis, Tensorflow, etc.), radius search, particle filter, hybrid sampling, optical flow, or non-maximum suppression post-processing. In an embodiment based on SSD, the object detector 320 can detect an object within 400 milliseconds of compute time, which is an improvement from existing object detection methods that require over one second of compute time to detect an object.”)
(Liu, “[0042] The tracker 330 tracks objects in image frames detected by the object detector 320. The tracker 330 may receive image parameters from the cropping engine 310 as input to perform tracking [second sensor information detected by a second sensor; wherein the object detector, cropping engine, and attention processing engine together make up the second sensor].”)
Liu further explicitly discloses multiple sensors:
(Liu, “[0056] In one embodiment, the collision warning system 100 receives 802 sensor data associated with a vehicle 140 captured by sensors of a client device 110. The sensor data includes motion data and images of fields of view from the vehicle 140 (e.g., a front-facing dashboard view).”)
Liu teaches synchronize the first sensor information and the second sensor information based on a frame rate at which one of the first sensor information or the second sensor information is acquired;
(Liu, “[0045] In embodiments using synchronous detection and tracking, the object detector 320 and tracker 330 both process image data at a same frame rate (e.g., 30 frames per second). In contrast, in a different embodiment using asynchronous detection and tracking, the object detector 320 may perform down sampling (e.g., decrease from 30 to three frames per second) to speed up object detection processing in conjunction with the tracker 330 [synchronize, i.e., either process at the same frame rate or down sample to speed up processing in conjunction with the tracker (the second sensor), the first sensor information and the second sensor information based on a frame rate at which one of the first sensor information or the second sensor information is acquired].”)
Liu teaches generate a learned model for estimation of the second sensor information corresponding to the first sensor information, wherein the generation of the learned model is based on input of the first sensor information and the second sensor information in synchronization with the first sensor information
(Liu, [0046], “In some embodiments, the tracker 330 uses a Bayesian model to update a probability density function for predicting motion or position of an object along a trajectory [generate a learned model, i.e., the Bayesian model, for estimation of the second sensor information, i.e., the predicted motion or position of an object, corresponding to the first sensor information, i.e., the detected object(s); wherein the generation of the learned model is based on input of the first sensor information and the second sensor information in synchronization with the first sensor information, the images being processed in synchronization as taught above].”)
Liu teaches and the generation of the learned model includes: storage of a pair of the first sensor information and the second sensor information as training data based on a match between sensing environments of the first sensor information and the second sensor information;
(Liu, “[0042] The tracker 330 tracks objects in image frames detected by the object detector 320. The tracker 330 may receive image parameters from the cropping engine 310 as input to perform tracking. The tracker 330 determines trajectory information (also referred to as features or object data), which may include a distance between a vehicle 140 and another object, or a rate of change of the distance (e.g., indicating that the vehicle 140 is quickly approaching the other object)…
[0046] In an embodiment using synchronous detection and tracking, the tracker 330 determines matches (e.g., data association) between one or more detected objects and one or more trajectories at a given frame [storage of a pair of the first sensor information and the second sensor information …based on a match between sensing environments of the first sensor information and the second sensor information; wherein the detected object and one or more trajectories must be stored in order to determine a match]. The tracker 330 determines matches based on an affinity score, which accounts for similarity in appearance (e.g., relative position or dimensions in image frames) between a detected object and trajectory, as well as motion consistency between the detected object and trajectory. The tracker 330 may determine motion consistency based on a measure of intersection over a union of the detected object and trajectory in a constant position-based motion model and a constant velocity-based motion model. The tracker 330 selects the motion model corresponding to the greater measure of intersection, i.e., closer match. Using this process, the tracker 330 may match one or more features across multiple frames, where the features follow a given predicted trajectory. Example features may include objects such as vehicles, people, or structures, as well as low-level features such as a corner or window of a building, or a portion of a vehicle [as training data wherein the data is subsequently utilized to train a Bayesian model]. In some embodiments, the tracker 330 uses a Bayesian model to update a probability density function for predicting motion or position of an object along a trajectory.”)
Liu teaches and change of a coupling weighting coefficient between nodes of the learned model based on the training data; store the learned model;
(Liu, [0046], “In some embodiments, the tracker 330 uses a Bayesian model to update a probability density function for predicting motion or position of an object along a trajectory [change of a coupling weighting coefficient between nodes of the learned model based on the training data; store the learned model; wherein the learned model must be stored on the tracker in order to update it]. The tracker 330 may recursively update a current state of the Bayesian model using new camera data or sensor measurements. The tracker 330 may also determine predictions by implementing algorithms known to one skilled in the art including, e.g., Global Nearest Neighbor (GNN), the Hungarian Algorithm, and K-best assignment. The tracker 330 may calculate the affinity score by summing sub-scores for the similarity in appearance and motion consistency, which may be a weighted average.”)
Liu teaches and provide a result of a service based on the learned model.
(Liu, [0057], “The decision engine 350 determines 810 a probability of a potential collision between the vehicle 140 and the object by tracking (by the tracker 330) motion of the object using the images. In addition, the tracker 330 may determine a change in distance between the vehicle and the object using an amount of pixel shift between a first and second (e.g., consecutive) image frames. In an embodiment, the probability of potential collision is determined using a machine learning model trained using features determined using the images, trajectory of the object, and a predicted time-to-collision of the vehicle with the object. Responsive to determining that the probability is greater than a threshold value, the decision engine 350 provides 812 a notification of the potential collision for presentation by the client device 110 [provide a result of a service based on the learned model; wherein the service is the Collision Warning System].”)
Regarding claim 2,
Liu teaches The information processing apparatus according to claim 1, wherein the circuitry is further configured to: request sensor information from a first apparatus, i.e., tracker 330, including one of the first sensor or the second sensor; and collect one of the first sensor information or the second sensor information from the first apparatus based on the request.
(Liu, [0046], “The tracker 330 may recursively update a current state of the Bayesian model using new camera data or sensor measurements.”; wherein the tracker recursively updates the model based on new data, and the new data is interpreted as the requested and collected sensor information)
Regarding claim 3,
Liu teaches The information processing apparatus according to claim 1, wherein the circuitry is further configured to: generate the learned model for each combination of the first sensor and the second sensor;
(Liu, “[0046] In an embodiment using synchronous detection and tracking, the tracker 330 determines matches (e.g., data association) between one or more detected objects and one or more trajectories at a given frame. The tracker 330 determines matches based on an affinity score, which accounts for similarity in appearance (e.g., relative position or dimensions in image frames) between a detected object and trajectory, as well as motion consistency between the detected object and trajectory. The tracker 330 may determine motion consistency based on a measure of intersection over a union of the detected object and trajectory in a constant position-based motion model and a constant velocity-based motion model. The tracker 330 selects the motion model corresponding to the greater measure of intersection, i.e., closer match. Using this process, the tracker 330 may match one or more features across multiple frames [for each combination of the first sensor and the second sensor; interpreted to be matches across a given frame or multiple frames], where the features follow a given predicted trajectory. Example features may include objects such as vehicles, people, or structures, as well as low-level features such as a corner or window of a building, or a portion of a vehicle. In some embodiments, the tracker 330 uses a Bayesian model [generate the learned model] to update a probability density function for predicting motion or position of an object along a trajectory.”)
Liu teaches accumulate one or more of the learned model generated for each combination of the first sensor and the second sensor;
(Liu, [0046], “The tracker 330 may also determine predictions by implementing algorithms known to one skilled in the art including, e.g., Global Nearest Neighbor (GNN), the Hungarian Algorithm, and K-best assignment. The tracker 330 may calculate the affinity score by summing sub-scores for the similarity in appearance and motion consistency, which may be a weighted average [accumulate one or more of the learned model generated for each combination of the first sensor and the second sensor; wherein GNN, K-best assignment, etc. may also be utilized, thus accumulating one or more learned models; however, Examiner respectfully notes that only one model is required and thus, for subsequent limitations, only the Bayesian model is relied upon].”)
Liu teaches selectively read the learned model corresponding to a request from a second apparatus; and provide the result of the service based on the read learned model.
(Liu, [0057], “The decision engine 350 determines 810 a probability of a potential collision between the vehicle 140 and the object by tracking (by the tracker 330 [selectively read the learned model corresponding to a request from a second apparatus; wherein the decision engine is the second apparatus and the decision engine is reading from the tracker]) motion of the object using the images. In addition, the tracker 330 may determine a change in distance between the vehicle and the object using an amount of pixel shift between a first and second (e.g., consecutive) image frames. In an embodiment, the probability of potential collision is determined using a machine learning model trained using features determined using the images, trajectory of the object, and a predicted time-to-collision of the vehicle with the object. Responsive to determining that the probability is greater than a threshold value, the decision engine 350 provides 812 a notification of the potential collision for presentation by the client device 110 [provide the result of the service based on the read learned model].”)
Regarding claim 4,
Liu teaches The information processing apparatus according to claim 1, wherein the circuitry is further configured to provide the result of the service based on the learned model and a request from a second apparatus.
(Liu, [0057], “The decision engine 350 determines 810 a probability of a potential collision between the vehicle 140 and the object by tracking (by the tracker 330) motion of the object using the images. In addition, the tracker 330 may determine a change in distance between the vehicle and the object using an amount of pixel shift between a first and second (e.g., consecutive) image frames. In an embodiment, the probability of potential collision is determined using a machine learning model trained using features determined using the images, trajectory of the object, and a predicted time-to-collision of the vehicle with the object. Responsive to determining that the probability is greater than a threshold value, the decision engine 350 provides 812 a notification of the potential collision for presentation by the client device 110 [provide the result of the service based on the learned model and a request from a second apparatus; wherein the second apparatus is the decision engine and the decision engine requests data from the tracker (thus, the result of the service is based on the learned model and a request)].”)
Regarding claim 5,
Liu teaches The information processing apparatus according to claim 1, wherein the circuitry is further configured to provide the learned model to a second apparatus as the result of the service.
(Liu, [0057], “The decision engine 350 determines 810 a probability of a potential collision between the vehicle 140 and the object by tracking (by the tracker 330) motion of the object using the images. In addition, the tracker 330 may determine a change in distance between the vehicle and the object using an amount of pixel shift between a first and second (e.g., consecutive) image frames. In an embodiment, the probability of potential collision is determined using a machine learning model trained using features determined using the images, trajectory of the object, and a predicted time-to-collision of the vehicle with the object. Responsive to determining that the probability is greater than a threshold value, the decision engine 350 provides 812 a notification of the potential collision for presentation by the client device 110 [provide the learned model to a second apparatus as the result of the service; wherein the second apparatus is the decision engine and the decision engine requests data from the tracker (thus the tracker provides the learned model)].”)
Regarding claim 6,
Liu teaches The information processing apparatus according to claim 1, wherein the circuitry is further configured to: estimate the second sensor information corresponding to the first sensor information transmitted from a second apparatus by using the learned model; and return the estimated second sensor information, as the result of the service.
(Liu, [0057], “The decision engine 350 determines 810 a probability of a potential collision between the vehicle 140 and the object by tracking (by the tracker 330) [estimate the second sensor information corresponding to the first sensor information transmitted from a second apparatus by using the learned model] motion of the object using the images. In addition, the tracker 330 may determine a change in distance between the vehicle and the object using an amount of pixel shift between a first and second (e.g., consecutive) image frames. In an embodiment, the probability of potential collision is determined using a machine learning model trained using features determined using the images, trajectory of the object, and a predicted time-to-collision of the vehicle with the object. Responsive to determining that the probability is greater than a threshold value, the decision engine 350 provides 812 a notification of the potential collision for presentation by the client device 110 [and return the estimated second sensor information, as the result of the service].”)
Claim 7 is rejected on the same rationale under 35 U.S.C. 102 as claim 1.
Regarding claim 8,
Liu teaches An information processing apparatus, comprising: a first sensor configured to detect first sensor information;
(Liu, “[0040] The object detector 320 detects objects in cropped portions of image frames of input image data determined by the cropping engine 310. Additionally, the object detector 320 may perform object detection using image parameters received from the cropping engine 310 including image type or resolution, camera or video frame rate, scale used for cropping, etc [a first sensor configured to detect first sensor information]. The input image data may include image frames of a video clip having a predetermined duration (e.g., 30-60 seconds). The object detector 320 may use one or more types of object detection models or techniques known to one skilled in the art including, but not limited to, single shot detection (SSD), normalized cross-correlation (NCC), machine learning or deep learning architectures (e.g., neural networks, convolutions, Bayesian inference, linear classification, loss functions, optimizations, generative models, principle component analysis, Tensorflow, etc.), radius search, particle filter, hybrid sampling, optical flow, or non-maximum suppression post-processing. In an embodiment based on SSD, the object detector 320 can detect an object within 400 milliseconds of compute time, which is an improvement from existing object detection methods that require over one second of compute time to detect an object.”)
Liu teaches and circuitry configured to receive, from a third apparatus that generates a learned model, a result of a service based on the learned model for estimation of second sensor information corresponding to the first sensor information,
(Liu, [0057], “The decision engine 350 determines 810 a probability of a potential collision between the vehicle 140 and the object by tracking (by the tracker 330 [receive from a third apparatus that generates a learned model; wherein the decision engine receives data from the tracker ie third apparatus]) motion of the object using the images. In addition, the tracker 330 may determine a change in distance between the vehicle and the object using an amount of pixel shift between a first and second (e.g., consecutive) image frames. In an embodiment, the probability of potential collision is determined using a machine learning model trained using features determined using the images, trajectory of the object, and a predicted time-to-collision of the vehicle with the object. Responsive to determining that the probability is greater than a threshold value, the decision engine 350 provides 812 a notification of the potential collision for presentation by the client device 110 [a result of a service based on the learned model for estimation of second sensor information corresponding to the first sensor information].”)
Liu teaches wherein the generation of the learned model is based on synchronization of the first sensor information and the second sensor information,
(Liu, “[0045] In embodiments using synchronous detection and tracking, the object detector 320 and tracker 330 both process image data at a same frame rate (e.g., 30 frames per second). In contrast, in a different embodiment using asynchronous detection and tracking, the object detector 320 may perform down sampling (e.g., decrease from 30 to three frames per second) to speed up object detection processing in conjunction with the tracker 330 [wherein the generation of the learned model is based on synchronization of the first sensor information and the second sensor information].”)
Liu teaches and the generation of the learned model includes: storage of a pair of the first sensor information and the second sensor information as training data based on a match between sensing environments of the first sensor information and the second sensor information;
(Liu, “[0042] The tracker 330 tracks objects in image frames detected by the object detector 320. The tracker 330 may receive image parameters from the cropping engine 310 as input to perform tracking. The tracker 330 determines trajectory information (also referred to as features or object data), which may include a distance between a vehicle 140 and another object, or a rate of change of the distance (e.g., indicating that the vehicle 140 is quickly approaching the other object)…
[0046] In an embodiment using synchronous detection and tracking, the tracker 330 determines matches (e.g., data association) between one or more detected objects and one or more trajectories at a given frame [storage of a pair of the first sensor information and the second sensor information …based on a match between sensing environments of the first sensor information and the second sensor information; wherein the detected object and one or more trajectories must be stored in order to determine a match]. The tracker 330 determines matches based on an affinity score, which accounts for similarity in appearance (e.g., relative position or dimensions in image frames) between a detected object and trajectory, as well as motion consistency between the detected object and trajectory. The tracker 330 may determine motion consistency based on a measure of intersection over a union of the detected object and trajectory in a constant position-based motion model and a constant velocity-based motion model. The tracker 330 selects the motion model corresponding to the greater measure of intersection, i.e., closer match. Using this process, the tracker 330 may match one or more features across multiple frames, where the features follow a given predicted trajectory. Example features may include objects such as vehicles, people, or structures, as well as low-level features such as a corner or window of a building, or a portion of a vehicle [as training data wherein the data is subsequently utilized to train a Bayesian model].”)
Liu teaches and change of a coupling weighting coefficient between nodes of the learned model based on the training data.
(Liu, [0046], “In some embodiments, the tracker 330 uses a Bayesian model to update a probability density function for predicting motion or position of an object along a trajectory [change of a coupling weighting coefficient between nodes of the learned model based on the training data]. The tracker 330 may recursively update a current state of the Bayesian model using new camera data or sensor measurements. The tracker 330 may also determine predictions by implementing algorithms known to one skilled in the art including, e.g., Global Nearest Neighbor (GNN), the Hungarian Algorithm, and K-best assignment. The tracker 330 may calculate the affinity score by summing sub-scores for the similarity in appearance and motion consistency, which may be a weighted average.”)
Regarding claim 9,
Liu teaches The information processing apparatus according to claim 8, wherein the circuitry is further configured to: request the result of the service from the third apparatus; and receive the result of the service returned from the third apparatus based on the request.
(Liu, [0057], “The decision engine 350 determines 810 a probability of a potential collision between the vehicle 140 and the object by tracking (by the tracker 330) [request the result of the service from the third apparatus] motion of the object using the images. In addition, the tracker 330 may determine a change in distance between the vehicle and the object using an amount of pixel shift between a first and second (e.g., consecutive) image frames. In an embodiment, the probability of potential collision is determined using a machine learning model trained using features determined using the images, trajectory of the object, and a predicted time-to-collision of the vehicle with the object. Responsive to determining that the probability is greater than a threshold value, the decision engine 350 provides 812 a notification of the potential collision for presentation by the client device 110 [receive the result of the service returned from the third apparatus based on the request].”)
Regarding claim 11,
Liu teaches The information processing apparatus according to claim 8, wherein the circuitry is further configured to: receive, from the third apparatus, the learned model as the result of the service; and estimate the second sensor information corresponding to the first sensor information using the received learned model.
(Liu, [0057], “The decision engine 350 determines 810 a probability of a potential collision between the vehicle 140 and the object by tracking (by the tracker 330) [receive, from the third apparatus, the learned model as the result of the service] motion of the object using the images. In addition, the tracker 330 may determine a change in distance between the vehicle and the object using an amount of pixel shift between a first and second (e.g., consecutive) image frames. In an embodiment, the probability of potential collision is determined using a machine learning model trained using features determined using the images, trajectory of the object, and a predicted time-to-collision of the vehicle with the object. Responsive to determining that the probability is greater than a threshold value, the decision engine 350 provides 812 a notification of the potential collision for presentation by the client device 110 [estimate the second sensor information corresponding to the first sensor information using the received learned model].”)
Regarding claim 12,
Liu teaches The information processing apparatus according to claim 8, wherein the circuitry is further configured to: transmit the first sensor information detected by the first sensor to the third apparatus;
(Liu, “[0042] The tracker 330 tracks objects in image frames detected by the object detector 320.”)
Liu teaches and receive as the result of the service, from the third apparatus, the second sensor information corresponding to first sensor information, wherein the second sensor information is estimated using the learned model.
(Liu, [0057], “The decision engine 350 determines 810 a probability of a potential collision between the vehicle 140 and the object by tracking (by the tracker 330) [receive as the result of the service, from the third apparatus, the second sensor information corresponding to first sensor information, wherein the second sensor information is estimated using the learned model] motion of the object using the images. In addition, the tracker 330 may determine a change in distance between the vehicle and the object using an amount of pixel shift between a first and second (e.g., consecutive) image frames. In an embodiment, the probability of potential collision is determined using a machine learning model trained using features determined using the images, trajectory of the object, and a predicted time-to-collision of the vehicle with the object. Responsive to determining that the probability is greater than a threshold value, the decision engine 350 provides 812 a notification of the potential collision for presentation by the client device 110.”)
Claim 13 is rejected on the same rationale under 35 U.S.C. 102 as claim 9.
Claims 14 and 15 are rejected on the same rationale under 35 U.S.C. 102 as claim 7.
Claims 16-17 are rejected on the same rationale under 35 U.S.C. 102 as claim 12.
Claim 18 is rejected on the same rationale under 35 U.S.C. 102 as claim 14.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 2008/0158256 A1 (Russell et al.) teaches a method and system for providing a perspective-view image by intelligent fusion of a plurality of sensor data.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASMINE THAI whose telephone number is (703)756-5904. The examiner can normally be reached M-F 8-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.T.T./Examiner, Art Unit 2129
/MICHAEL J HUNTLEY/Supervisory Patent Examiner, Art Unit 2129