DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1, 9, and 18 are amended.
Claims 1, 5-9, and 13-18 are pending.
No new claims are added.
Applicant's amendments are entered. Applicant's remarks are also entered into the record. A new search, necessitated by Applicant's amendments and remarks, was made.
Response to Arguments
Applicant's remarks and arguments have been respectfully considered. As to the argument on page 11 that "Levinson does not teach or suggest the features absent from Waschulzik," Examiner maintains that Levinson teaches an autonomous vehicle, which is not a rail-bound vehicle that cannot be steered (see Levinson, column 16, lines 46-65: such a response and/or other action 126 may also include controlling the drive module(s) 314 to change a speed, direction, and/or other operating parameter of the vehicle 302; see also para. [0002]); a track of the object data that is different from a motion of the vehicle (see Levinson: generating sensor data indicative of objects in an environment, which is object data and is not the same as the motion of the vehicle); and evaluating the respective sensor data separately (see Levinson, figure 5: image data 110, LIDAR sensor data 112, and sensor data 114 are separate and are compared separately with the perception system data at a particular time).
Examiner therefore respectfully traverses the argument on page 11 and rejects the claims in view of Levinson.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
Sensor data storing unit (claim 9);
Object data determining unit (claim 9);
Evaluating unit (claims 9 and 16);
Validation determining unit (claim 9);
Disabling unit (claim 9); and
Enabling unit (claim 9).
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 5, 8-9, 13, and 16-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 10,964,349 B2 to Levinson (hereinafter "Levinson").
Regarding claim 1, Levinson teaches A method performed by an object validation system (see Levinson, claim 4; column 1, lines 10-30: the perception system may identify a group of objects present in the environment based on the sensor data) for validating perceived surrounding objects to support safety-critical threat assessment governing emergency maneuvering (see Levinson, column 2: identifying errors associated with individual sensor modalities by identifying respective groups of objects using data generated by the individual sensor modalities) of an Automated Driving System, ADS, on-board a road-bound vehicle (see Levinson, figure 2),
the method comprising:
storing in respective sensor/modality-specific data buffers, respective sensor/modality-specific sensor data obtained at least during a predeterminable time interval at least one of continuously and intermittently from one or more vehicle-mounted surrounding detecting sensors; (see Levinson, at least column 2, line 37 - column 4, line 13: a plurality of sensors disposed on a vehicle, such as an autonomous vehicle, and operably connected to one or more processors and/or remote computing devices; the first sensor may include an image capture device, and the first signal may include image data representing a scene; a second sensor disposed on the vehicle may include a LIDAR sensor; and one or more additional sensors (e.g., a RADAR sensor, a SONAR sensor, a depth sensing camera, time of flight sensors, etc.) disposed on the vehicle and configured to detect objects in the environment of the vehicle)
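For illustration only, and not as part of the claim mapping: the buffering arrangement recited in this limitation can be pictured with the following minimal Python sketch of per-modality, fixed-horizon buffers. All names (ModalityBuffer, horizon_s) are hypothetical and are not drawn from Levinson or from the claims.

```python
import collections

class ModalityBuffer:
    """Fixed-horizon buffer for detections from a single sensor modality."""

    def __init__(self, modality: str, horizon_s: float = 2.0):
        self.modality = modality        # e.g., "camera", "lidar", "radar"
        self.horizon_s = horizon_s      # the predeterminable time interval
        self._entries = collections.deque()  # (timestamp, detections) pairs

    def store(self, timestamp: float, detections: list) -> None:
        """Append a frame (continuous or intermittent arrival) and age out
        anything older than the retention horizon."""
        self._entries.append((timestamp, detections))
        while self._entries and timestamp - self._entries[0][0] > self.horizon_s:
            self._entries.popleft()

    def entries(self) -> list:
        return list(self._entries)

# One buffer per sensor/modality, kept strictly separate.
buffers = {m: ModalityBuffer(m) for m in ("camera", "lidar", "radar")}
buffers["lidar"].store(0.10, [(12.3, 4.5)])  # detection positions (x, y) in meters
```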
determining, with support from a perception module configured to generate perception data based on sensor data from one or more vehicle-mounted surrounding detecting sensors, object data of a perceived object valid for the time interval (see Levinson, column 3 - column 4, line 13: through one or more data fusion processes, a perception system of the present disclosure may generate fused sensor data that represents the environment ... determined, and/or otherwise indicated by the perception system as being present within the environment based at least in part on the sensor data received from the individual sensor modalities),
the object data comprising a track of the object that is different from a motion of the vehicle (see Levinson: generating sensor data indicative of objects in an environment, which is distinct from the motion of the vehicle);
evaluating one or more of the respective sensor/modality-specific data buffers, separately, in view of the track of the object data (see Levinson, figure 3, which is taken to implicitly disclose that the data received from an individual sensor is stored in the vehicle memory separately; figure 5: image data 110, LIDAR sensor data 112, and sensor data 114 are separate and compared separately with the perception system data at a particular time);
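Again for illustration only: "evaluating ... separately" can be pictured as checking the object's track against each modality buffer in isolation, as in the hypothetical sketch below. The track format, distance threshold, and all names are assumptions for the example, not teachings of Levinson.

```python
import math

def track_matches_buffer(track, buffer_entries, max_dist_m=1.5):
    """Evaluate ONE modality buffer, in isolation, against an object track.

    track:          list of (timestamp, x, y) states of the perceived object.
    buffer_entries: list of (timestamp, [(x, y), ...]) frames from one modality.
    Returns True if every track state is corroborated by a nearby detection
    in the time-closest buffered frame of this modality.
    """
    if not buffer_entries:
        return False
    for t, x, y in track:
        ts, detections = min(buffer_entries, key=lambda e: abs(e[0] - t))
        if not any(math.hypot(dx - x, dy - y) <= max_dist_m
                   for dx, dy in detections):
            return False
    return True

track = [(0.0, 12.0, 4.4), (0.1, 12.3, 4.5)]          # object track over the interval
lidar = [(0.0, [(12.1, 4.4)]), (0.1, [(12.3, 4.5)])]  # one modality's buffer
radar = [(0.0, [(30.0, 9.0)]), (0.1, [(30.2, 9.1)])]  # another modality's buffer

# Each buffer is evaluated separately, per the claim language.
per_modality_match = {"lidar": track_matches_buffer(track, lidar),
                      "radar": track_matches_buffer(track, radar)}
print(per_modality_match)  # {'lidar': True, 'radar': False}
```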
determining that the perceived object is a validated object when the track of the object data matches sensed objects in the one or more respective sensor/modality-specific data buffers according to predeterminable matching criteria (see Levinson, column 30, lines 8-12: such object association processes may be performed by the perception system 116 using one or more data alignment, feature matching, and/or other data mapping techniques),
for one or both of a predetermined number and a predetermined constellation of the one or more respective sensor/modality-specific data buffers, and otherwise is an unvalidated object (see Levinson, column 39, lines 9-28: "Yes," a difference in the various parameters being compared exists (and/or meets or exceeds some threshold); for example, the perception system 116 may determine at 712 that, "Yes," a first classification of an object (e.g., a first parameter) associated with the first group of objects 128 is different from a fourth classification of the object (e.g., a fourth parameter) associated with the fourth group of objects 120, and the system may proceed to 714 and/or to 716);
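As a hypothetical illustration of validating by "a predetermined number and a predetermined constellation" of matching buffers (the threshold and constellations below are invented for the example and appear nowhere in Levinson or the claims):

```python
def is_validated(per_modality_match: dict,
                 min_matches: int = 2,
                 constellations=(frozenset({"camera", "lidar"}),
                                 frozenset({"lidar", "radar"}))) -> bool:
    """Validated if enough buffers match the track (predetermined number),
    or if some predetermined combination ("constellation") of buffers all
    match; otherwise the object remains unvalidated."""
    matched = {m for m, ok in per_modality_match.items() if ok}
    return len(matched) >= min_matches or any(c <= matched for c in constellations)

print(is_validated({"camera": True, "lidar": True, "radar": False}))   # True
print(is_validated({"camera": False, "lidar": True, "radar": False}))  # False
```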
disabling the perceived object from being considered in safety-critical threat assessment comprising emergency steering operations at least partially on a road surface upon which the road-bound vehicle is located (see Levinson, column 13: in at least one example, the one or more system controller(s) 326 can be configured to control steering, propulsion, braking, safety, emitters, communication, and other systems of the vehicle 302) when the perceived object is determined to be an unvalidated object (see Levinson, column 16, lines 46-65: such a response and/or other action 126 may include, among other things, at least one of ignoring a portion of the image data 110); and
enabling the perceived object to be considered in the safety-critical threat assessment when the perceived object is determined to be a validated object (see Levinson, column 29, lines 25-30: the object detection system 118 of the perception system 116 may be configured to, for example, determine whether any differences exist between the groups of objects 502, 508, 514, and/or between any parameters associated with the groups of objects 502, 508, 514; see also column 14, figure 7, and column 39, lines 8-27; Levinson does not expressly teach the enabling and disabling of the perceived object; however, Levinson teaches identifying an error associated with data included in the first signal and/or the second signal).
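Purely as an illustration of how the enabling/disabling limitations operate, these steps amount to a gate in front of the threat-assessment function. A minimal sketch with hypothetical object records follows; nothing in it is asserted to be Levinson's implementation.

```python
def objects_for_threat_assessment(perceived_objects, validated_ids):
    """Only validated objects may be considered in safety-critical threat
    assessment (e.g., triggering emergency steering); unvalidated objects
    are disabled from that assessment."""
    enabled = [o for o in perceived_objects if o["id"] in validated_ids]
    disabled = [o for o in perceived_objects if o["id"] not in validated_ids]
    return enabled, disabled

objs = [{"id": 1, "cls": "car"}, {"id": 2, "cls": "unknown"}]
enabled, disabled = objects_for_threat_assessment(objs, validated_ids={1})
print([o["id"] for o in enabled])   # [1] -> considered for emergency maneuvers
print([o["id"] for o in disabled])  # [2] -> ignored by the threat assessment
```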
Regarding claim 5, Levinson teaches wherein the matching criteria comprises finding a match for one or more of a predeterminable number of and combination of the one or more respective sensor/modality-specific data buffers (see Levinson, column 21, lines 37-43: the vehicle 202 may include any number of sensors in any combination or configuration; for example, the vehicle 202 includes at least sensors 402, 404, and 406).
Regarding claim 8, Levinson teaches wherein the evaluating comprises comparing error regions deduced from potential sensor measurement errors and error regions deduced from safety requirements associated with the perceived object's states (see Levinson, at least column 7, line 56 - column 8, line 15: in some examples, the perception system 116 may determine an error associated with data included in one or more respective sensor signals; for example, the perception system 116 may determine whether an object 122 included in the fused sensor data 134 (e.g., included in the particular group of objects 120) is absent from or misclassified in the group of objects 128 associated with (e.g., determined based on) the image data 110, the group of objects 130 associated with (e.g., determined based on) the LIDAR sensor data 112, and/or the group of objects 132 associated with (e.g., determined based on) the sensor data 114).
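One plausible reading of the claim 8 comparison, sketched below with invented numbers and names: accept a match only when the deviation plus the sensor's measurement-error region stays inside the error region allowed by the safety requirement. This is an interpretive illustration, not a teaching of Levinson or of the application.

```python
import math

def within_safety_bound(measured_pos, track_pos,
                        sensor_error_m: float, safety_margin_m: float) -> bool:
    """Compare the error region deduced from potential sensor measurement
    error against the error region deduced from the safety requirement:
    the measurement uncertainty must fit inside the safety-derived bound."""
    deviation = math.hypot(measured_pos[0] - track_pos[0],
                           measured_pos[1] - track_pos[1])
    return deviation + sensor_error_m <= safety_margin_m

print(within_safety_bound((10.2, 4.9), (10.0, 5.0),
                          sensor_error_m=0.3, safety_margin_m=1.0))  # True
```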
Regarding claim 9, Levinson teaches An object validation system for validating perceived surrounding objects to support safety-critical threat assessment governing emergency maneuvering (see Levinson, column 2: identifying errors associated with individual sensor modalities by identifying respective groups of objects using data generated by the individual sensor modalities) of an Automated Driving System, ADS, on-board a road-bound vehicle (see Levinson, figure 2),
the object validation system comprising:
a sensor data storing unit configured to store, in respective sensor/modality-specific data buffers, respective sensor/modality-specific sensor data obtained one or more of at least during a predeterminable time interval continuously and intermittently from one or more vehicle-mounted surrounding detecting sensors (see Levinson, at least column 2, line 37 - column 4, line 13: a plurality of sensors disposed on a vehicle, such as an autonomous vehicle, and operably connected to one or more processors and/or remote computing devices; the first sensor may include an image capture device, and the first signal may include image data representing a scene; a second sensor disposed on the vehicle may include a LIDAR sensor; and one or more additional sensors (e.g., a RADAR sensor, a SONAR sensor, a depth sensing camera, time of flight sensors, etc.) disposed on the vehicle and configured to detect objects in the environment of the vehicle);
an object data determining unit configured to determine, with support from a perception module configured to generate perception data based on sensor data from one or more vehicle-mounted surrounding detecting sensors, object data of a perceived object valid for the time interval (see Levinson, column 3 - column 4, line 13: through one or more data fusion processes, a perception system of the present disclosure may generate fused sensor data that represents the environment ... determined, and/or otherwise indicated by the perception system as being present within the environment based at least in part on the sensor data received from the individual sensor modalities),
the object data comprising a track of the object that is different from a motion of the vehicle (see Levinson: generating sensor data indicative of objects in an environment, which is distinct from the motion of the vehicle);
an evaluating unit configured to evaluate one or more of the respective sensor/modality-specific data buffers, separately, in view of the track of the object data (see Levinson, figure 3, which is taken to implicitly disclose that the data received from an individual sensor is stored in the vehicle memory separately; figure 5: image data 110, LIDAR sensor data 112, and sensor data 114 are separate and compared separately with the perception system data at a particular time);
a validation determining unit configured to determine that the perceived object is a validated object when the track of the object data matches sensed objects in the one or more respective sensor/modality-specific data buffers according to predeterminable matching criteria (see Levinson, column 30, lines 8-12: such object association processes may be performed by the perception system 116 using one or more data alignment, feature matching, and/or other data mapping techniques),
for one or both of a predetermined number and a predetermined constellation of the one or more respective sensor/modality-specific data buffers, and otherwise is an unvalidated object (see Levinson, column 39, lines 9-28: "Yes," a difference in the various parameters being compared exists (and/or meets or exceeds some threshold); for example, the perception system 116 may determine at 712 that, "Yes," a first classification of an object (e.g., a first parameter) associated with the first group of objects 128 is different from a fourth classification of the object (e.g., a fourth parameter) associated with the fourth group of objects 120, and the system may proceed to 714 and/or to 716);
a disabling unit configured to disable the perceived object from being considered in safety-critical threat assessment comprising emergency steering operations at least partially on a road surface upon which the road-bound vehicle is located when the perceived object is determined to be an unvalidated object (see Levinson, column 16, lines 46-65: such a response and/or other action 126 may include, among other things, at least one of ignoring a portion of the image data 110); and an enabling unit configured to enable the perceived object to be considered in the safety-critical threat assessment when the perceived object is determined to be a validated object (see Levinson, column 29, lines 25-30: the object detection system 118 of the perception system 116 may be configured to, for example, determine whether any differences exist between the groups of objects 502, 508, 514, and/or between any parameters associated with the groups of objects 502, 508, 514; see also column 14, figure 7, and column 39, lines 8-27; Levinson does not expressly teach the enabling and disabling of the perceived object; however, Levinson teaches identifying an error associated with data included in the first signal and/or the second signal).
Regarding claim 13, Levinson teaches the object validation system, wherein the matching criteria comprises finding a match for one or more of a predeterminable number of and combination of the one or more respective sensor/modality-specific data buffers (see Levinson, column 21, lines 37-43: the vehicle 202 may include any number of sensors in any combination or configuration; for example, the vehicle 202 includes at least sensors 402, 404, and 406).
Regarding claim 16, Levinson teaches wherein the evaluating unit is further configured to compare error regions deduced from potential sensor measurement errors and error regions deduced from safety requirements associated with the perceived object's states (see Levinson, at least column 7, line 56 - column 8, line 15: in some examples, the perception system 116 may determine an error associated with data included in one or more respective sensor signals; for example, the perception system 116 may determine whether an object 122 included in the fused sensor data 134 (e.g., included in the particular group of objects 120) is absent from or misclassified in the group of objects 128 associated with (e.g., determined based on) the image data 110, the group of objects 130 associated with (e.g., determined based on) the LIDAR sensor data 112, and/or the group of objects 132 associated with (e.g., determined based on) the sensor data 114).
Regarding claim 17, Levinson teaches wherein the object validation system is comprised in a vehicle (see Levinson, column 2: a plurality of sensors disposed on a vehicle, such as an autonomous vehicle).
Regarding claim 18, Levinson teaches A non-volatile computer readable storage medium having stored thereon a computer program that, when executed, causes at least one of a computer and a processor to perform a method for validating perceived surrounding objects to support safety-critical threat assessment governing emergency maneuvering (see Levinson, column 2: identifying errors associated with individual sensor modalities by identifying respective groups of objects using data generated by the individual sensor modalities) of an Automated Driving System, ADS, on-board a road-bound vehicle (see Levinson, figure 2), the method comprising:
storing in respective sensor/modality-specific data buffers, respective sensor/modality-specific sensor data obtained at least during a predeterminable time interval at least one of continuously and intermittently from one or more vehicle-mounted surrounding detecting sensors; (see Levinson, at least column 2, line 37 - column 4, line 13: a plurality of sensors disposed on a vehicle, such as an autonomous vehicle, and operably connected to one or more processors and/or remote computing devices; the first sensor may include an image capture device, and the first signal may include image data representing a scene; a second sensor disposed on the vehicle may include a LIDAR sensor; and one or more additional sensors (e.g., a RADAR sensor, a SONAR sensor, a depth sensing camera, time of flight sensors, etc.) disposed on the vehicle and configured to detect objects in the environment of the vehicle)
determining, with support from a perception module configured to generate perception data based on sensor data from one or more vehicle-mounted surrounding detecting sensors, object data of a perceived object valid for the time interval (see Levinson, column 3 - column 4, line 13: through one or more data fusion processes, a perception system of the present disclosure may generate fused sensor data that represents the environment ... determined, and/or otherwise indicated by the perception system as being present within the environment based at least in part on the sensor data received from the individual sensor modalities),
the object data comprising a track of the object that is different from a motion of the vehicle (see Levinson: generating sensor data indicative of objects in an environment, which is distinct from the motion of the vehicle);
evaluating one or more of the respective sensor/modality-specific data buffers, separately, in view of the track of the object data (see Levinson, figure 3, which is taken to implicitly disclose that the data received from an individual sensor is stored in the vehicle memory separately; figure 5: image data 110, LIDAR sensor data 112, and sensor data 114 are separate and compared separately with the perception system data at a particular time);
determining that the perceived object is a validated object when the track of the object data matches sensed objects in the one or more respective sensor/modality-specific data buffers according to predeterminable matching criteria (see Levinson, column 30, lines 8-12: such object association processes may be performed by the perception system 116 using one or more data alignment, feature matching, and/or other data mapping techniques),
for one or both of a predetermined number and a predetermined constellation of the one or more respective sensor/modality-specific data buffers, and otherwise is an unvalidated object (see Levinson, column 39, lines 9-28: "Yes," a difference in the various parameters being compared exists (and/or meets or exceeds some threshold); for example, the perception system 116 may determine at 712 that, "Yes," a first classification of an object (e.g., a first parameter) associated with the first group of objects 128 is different from a fourth classification of the object (e.g., a fourth parameter) associated with the fourth group of objects 120, and the system may proceed to 714 and/or to 716);
disabling the perceived object from being considered in safety-critical threat assessment comprising emergency steering operations at least partially on a road surface upon which the road-bound vehicle is located when the perceived object is determined to be an unvalidated object (see Levinson, column 16, lines 46-65: such a response and/or other action 126 may include, among other things, at least one of ignoring a portion of the image data 110); and enabling the perceived object to be considered in the safety-critical threat assessment when the perceived object is determined to be a validated object (see Levinson, column 29, lines 25-30: the object detection system 118 of the perception system 116 may be configured to, for example, determine whether any differences exist between the groups of objects 502, 508, 514, and/or between any parameters associated with the groups of objects 502, 508, 514; see also column 14, figure 7, and column 39, lines 8-27; Levinson does not expressly teach the enabling and disabling of the perceived object; however, Levinson teaches identifying an error associated with data included in the first signal and/or the second signal).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6-7 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over US 10,964,349 B2 to Levinson (hereinafter "Levinson") in view of US 2019/0294889 A1 to Sriram et al. (hereinafter "Sriram").
Regarding claim 6, Levinson remains applied as to claim 1. However, Levinson does not expressly disclose or otherwise teach wherein the matching criteria comprises fulfilling predeterminable overlap criteria. Nevertheless, Sriram, in the same field of endeavor, teaches wherein the matching criteria comprises fulfilling predeterminable overlap criteria (see Sriram, paras. [0115]-[0116]: determining an amount of intersection, overlap, and/or proximity between the region).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine Levinson's method of detecting errors in sensor data with Sriram's overlap criteria (region overlap) in order to allow larger fields of view due to wider viewing angles and longer viewing distances, and greater per-camera coverage of vehicles and people on the ground (see Sriram, para. [0002]).
Regarding claim 7, Levinson remains applied as to claim 1. However, Levinson does not expressly disclose or otherwise teach wherein the predeterminable overlap criteria stipulates conditions for one or more of object class overlap, object region overlap, and object state overlap. Nevertheless, Sriram, in the same field of endeavor, teaches wherein the predeterminable overlap criteria stipulates conditions for one or more of object class overlap, object region overlap, and object state overlap (see Sriram, para. [0037]: an amount of intersection, overlap, and/or proximity between the region of the field of view and a region of interest (ROI) of the field of view that corresponds to the designated space may be determined; for example, the ROI may be represented using a line, and the amount of intersection, overlap, and/or proximity may be the length of the line that falls within the region; the amount of intersection, overlap, and/or proximity may be used to determine the occupancy status of the designated space; see also Sriram, claim 2).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine Levinson's method of detecting errors in sensor data with Sriram's overlap criteria (region overlap) in order to allow larger fields of view due to wider viewing angles and longer viewing distances, and greater per-camera coverage of vehicles and people on the ground (see Sriram, para. [0002]).
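For illustration of the kind of overlap test Sriram's intersection/overlap teaching suggests: a common way to quantify object region overlap is intersection-over-union (IoU). The sketch below, with an invented threshold and record format, also shows a class-overlap condition; object state overlap could be tested analogously on, e.g., velocity or heading. None of these names or values are taken from Sriram.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def overlap_criteria_met(det_a, det_b, iou_threshold=0.5):
    """Class overlap (same label) AND region overlap (IoU above threshold)."""
    return (det_a["cls"] == det_b["cls"]
            and iou(det_a["box"], det_b["box"]) >= iou_threshold)

a = {"cls": "car", "box": (0.0, 0.0, 2.0, 2.0)}
b = {"cls": "car", "box": (1.0, 1.0, 3.0, 3.0)}
print(overlap_criteria_met(a, b))  # False: IoU = 1/7, below the 0.5 threshold
```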
Regarding claim 14, Levinson remains applied as to claim 9. However, Levinson does not expressly disclose or otherwise teach wherein the matching criteria comprises fulfilling predeterminable overlap criteria. Nevertheless, Sriram, in the same field of endeavor, teaches wherein the matching criteria comprises fulfilling predeterminable overlap criteria (see Sriram, paras. [0115]-[0116]: determining an amount of intersection, overlap, and/or proximity between the region).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine Levinson's method of detecting errors in sensor data with Sriram's overlap criteria (region overlap) in order to allow larger fields of view due to wider viewing angles and longer viewing distances, and greater per-camera coverage of vehicles and people on the ground (see Sriram, para. [0002]).
Regarding claim 15, Levinson remains applied as to claim 9. However, Levinson does not expressly disclose or otherwise teach wherein the predeterminable overlap criteria stipulates conditions for one or more of object class overlap, object region overlap, and object state overlap. Nevertheless, Sriram, in the same field of endeavor, teaches wherein the predeterminable overlap criteria stipulates conditions for one or more of object class overlap, object region overlap, and object state overlap (see Sriram, para. [0037]: an amount of intersection, overlap, and/or proximity between the region of the field of view and a region of interest (ROI) of the field of view that corresponds to the designated space may be determined; for example, the ROI may be represented using a line, and the amount of intersection, overlap, and/or proximity may be the length of the line that falls within the region; the amount of intersection, overlap, and/or proximity may be used to determine the occupancy status of the designated space; see also Sriram, claim 2).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine Levinson's method of detecting errors in sensor data with Sriram's overlap criteria (region overlap) in order to allow larger fields of view due to wider viewing angles and longer viewing distances, and greater per-camera coverage of vehicles and people on the ground (see Sriram, para. [0002]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAZIA AFRIN whose telephone number is (703) 756-1175. The examiner can normally be reached Monday-Friday, 7:30-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Scott A. Browne, can be reached at 571-270-0151. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NAZIA AFRIN/ Examiner, Art Unit 3666
/SCOTT A BROWNE/ Supervisory Patent Examiner, Art Unit 3666