DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed 02/04/2026 has been entered. Claims 1, 3, 5, and 7 are amended. Claims 1-7 are pending and are rejected as detailed below. This action is final, as necessitated by amendment.
Response to Arguments
Claim Rejections under 35 U.S.C. §102 and §103
Applicant argues that neither Wheeler nor Yuka discloses or suggests at least presenting a map indicating "an amount of protrusion of an object in the point cloud toward a road side." Applicant therefore requests that the rejection of claim 1, and of the claims that depend from claim 1, be withdrawn, and argues that claim 7 recites similar limitations and should be allowable for at least similar reasons.
Applicant’s arguments with respect to the rejections of claims 1 and 7, as amended herein, under 35 U.S.C. §102 have been fully considered but are not persuasive. Wheeler teaches a traffic light extending into the road side (Wheeler; para. 0073 and FIG. 7), a tree growing beyond a reasonable tolerance (e.g., extending into the road side and thus blocking traffic) and the map being updated with the corresponding amount of change (Wheeler; para. 0108), and describes the scope of changes and the size of the change (Wheeler; para. 0155). Furthermore, dependent claims 2-6 are rejected based on the rejection of claim 1. In particular, the amendments to claims 1 and 7 are addressed in the instant Office action.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1 and 7 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Applicant recites “the structure, and an amount of protrusion of an object in the point cloud toward a road side” in claim 1 and claim 7. However, Applicant fails to provide any written description support for “the structure, and an amount of protrusion of an object in the point cloud toward a road side”.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-2 and 4-7 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wheeler (US 20180188045 A1).
Regarding claim 1, Wheeler teaches (currently amended) A structure investigation assistance system (Wheeler, at least one para. 0002; “This disclosure relates generally to maps for autonomous vehicles, and more particularly to updating high definition maps based on sensor data collected by to autonomous vehicles.”) comprising:
a memory and at least one processor coupled to the memory the at least one processor performing operations to (Wheeler, at least one para. 0167; “The example computer system 1600 includes a processor 1602 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), one or more application specific integrated circuits (ASICs), one or more radio-frequency integrated circuits (RFICs), or any combination of these), a main memory 1604, and a static memory 1606, which are configured to communicate with each other via a bus 1608. ”):
store image data of a road captured continuously by a camera (Wheeler, at least one para. 0079; “The vehicle 150 receives 902 sensor data from the vehicle sensors 105 concurrently with the vehicle 150 traversing along a route. As described previously, the sensor data (e.g., the sensor data 230) includes, among others, image data, location data, vehicle motion data, and LIDAR scanner data.”);
store detailed information about a structure around the road in association with position information about the structure on a map, the detailed information on the structure being configured using point cloud data obtained by 3D scanning the structure (Wheeler, at least one para. 0080; “The vehicle 150 processes 904 the sensor data to determine a current location of the vehicle 150, and detects a set of objects (e.g., landmarks) from the sensor data. For example, the current location may be determined from the GPS location data. The set of objects may be detected from the image data and the LIDAR scanner data. In various embodiments, the vehicle 150 detects the objects in a predetermined region surrounding the vehicle's current location. For each determined object, the vehicle 150 may also determine information associated with the object such as a distance of the object away from the current location, a location of the object, a geometric shape of the object, and the like.”);
select, from the stored image data, image data targeted for an investigation request in a case where a request for an investigation of the structure is received (Wheeler, at least one para. 0137; “The map data request module 1330 selects a vehicle for requesting additional map data for specific location and send a request to the vehicle. The map data request module 1330 sends a request via the vehicle interface module 160 and also receives additional map data via the vehicle interface module 160.”);
select, from the stored detailed information, detailed information about the structure targeted for the investigation request (Wheeler, at least one para. 0081; “The vehicle 150 obtains 906 a set of represented objects (e.g., landmarks represented on the LMap) based on the current location of the vehicle. For example, the vehicle 150 queries its current location in the HD map data stored in the local HD map store 275 on the vehicle to find the set of represented objects located within a predetermined region surrounding the vehicle's current location. The HD map data stored in the on-vehicle or local HD map store 275 corresponds to a geographic region and includes landmark map data with representations of landmark objects in the geographic region. ”); and
present a map indicating a location of the structure (Wheeler, at least one para. 0081; “The HD map data stored in the on-vehicle or local HD map store 275 corresponds to a geographic region and includes landmark map data with representations of landmark objects in the geographic region. The representations of landmark objects include locations such as latitude and longitude coordinates of the represented landmark objects.”), the image data selected (Wheeler, at least one para. 0086; “the vehicle 150 processes the image data of the of the traffic sign to detect what is displayed on it, and processes the wireless signals from the sign to obtain wirelessly-communicated information from the sign, and compares these, and responds based on whether there is a match.”), the detailed information about the structure (Wheeler, at least one para. 0084; “The match record includes the current location of the vehicle 150 and a current timestamp. The match record may also include information about the verified represented object, such as an object ID identifying the verified represented object that is used in the existing landmark map stored in the HD map system HD map store 165. The object ID may be obtained from the local HD map store 275. The match record may further include other information about the vehicle (e.g., a particular make and model, vehicle ID, a current direction (e.g. relative to north), a current speed, a current motion, etc.) A match record may also include the version (e.g., a version ID) of the HD map that is stored in the local HD map store 275.”), and an amount of protrusion of an object in the point cloud toward a road side (Wheeler, at least one para. 0108; “In various embodiments, the online HD map system 110 verifies the existing occupancy maps and updates the existing occupancy maps. If an object (e.g., a tree, a wall, a barrier, a road surface) moves, appears, or disappears, then the occupancy map is updated to reflect the changes. 
For example, if a hole appears in a road, a hole has been filled, a tree is cut down, a tree grows beyond a reasonable tolerance, etc, then the occupancy map is updated. If an object's appearance changes, then the occupancy map is updated to reflect the changes.”, it is inherent that a reasonable tolerance for a tree is one that allows vehicular traffic to move freely. As a result, a tree growing beyond a reasonable tolerance can reasonably be interpreted as tree branches growing into the road side, thus disrupting the free flow of vehicular or pedestrian traffic.) and (Wheeler, at least one para. 0073; “FIG. 7 illustrates lane representations in an HD map, according to an embodiment. FIG. 7 shows a vehicle 710 at a traffic intersection. The HD map system provides the vehicle with access to the map data that is relevant for autonomous driving of the vehicle. This includes, for example, features 720a and 720b that are associated with the lane but may not be the closest features to the vehicle.”, showing the lateral positioning of traffic lights within the road side) and (Wheeler, at least one para. 0155; “if lane elements were traversed (i.e., driven over existing region in the map) the message includes a list of traversed lane element IDs, information describing a scope of change if any (what type of change and how big), a change fingerprint (to help identify duplicate changes), and a size of the change packet.”, teaching the scope of the change and the size/amount of the change).
Regarding claim 2, Wheeler teaches (previously presented) The structure investigation assistance system according to claim 1, wherein the at least one processor further performs operation to: select, as the image data targeted for the investigation request, from the stored image data (Wheeler, at least one para. 0084; “The vehicle 150 creates 912 a match record. A match record is a type of a landmark map verification record. A match record corresponds to a particular represented object in the landmark map stored in the local HD map store 275 that is determined to match an object detected by the vehicle, which can be referred to as a verified represented object.”), image data in a course of that an investigation implementer is heading for the structure from a current location (Wheeler, at least one para. 0086; “The vehicle can compare the traffic light or traffic sign with a live traffic signal feed from the V2X system to determine if there is a match. If there is not a match with the live traffic signal feed, then the vehicle may adjust how it responds to this landmark. In some cases, the information sent from the object (e.g., traffic light, traffic sign) may be dynamically controlled, for example, based on various factors such as traffic condition, road condition, or weather condition. For example, the vehicle 150 processes the image data of the of the traffic sign to detect what is displayed on it, and processes the wireless signals from the sign to obtain wirelessly-communicated information from the sign, and compares these, and responds based on whether there is a match. If the displayed information does not match the wirelessly-communicated information, the vehicle 150 determines that the verification failed and disregards the information when determining what actions to take.”, since the traffic lights are positioned in front of the vehicle, it is inherent that the vehicle is heading towards the traffic lights).
Regarding claim 4, Wheeler teaches (previously presented) The structure investigation assistance system according to claim 1, wherein the at least one processor further performs operation to: present point cloud data of the structure (Wheeler, at least one para. 0119; “The vehicle 150 compares 1116 the updated occupancy map to the existing occupancy map (i.e., the occupancy map stored in the local HD map store 275) to identify one or more discrepancies. The updated occupancy map includes 3D representations of objects in the environment surrounding the vehicle 150 detected from the sensor data.”) when viewed in a direction same as a direction of image data selected (Wheeler, at least one para. 0086; “the vehicle 150 processes the image data of the of the traffic sign to detect what is displayed on it, and processes the wireless signals from the sign to obtain wirelessly-communicated information from the sign, and compares these, and responds based on whether there is a match.”, since the traffic lights are positioned in front of the vehicle, it is inherent that image data is received from a front view camera).
Regarding claim 5, Wheeler teaches (currently amended) The structure investigation assistance system according to claim 1, wherein the at least one processor further performs operation to: store design drawing data of the structure (Wheeler, at least one para. 0080; “For each determined object, the vehicle 150 may also determine information associated with the object such as a distance of the object away from the current location, a location of the object, a geometric shape of the object, and the like.”, wherein the geometric shape can serve as design drawing data); and present the design drawing data of the structure (Wheeler, at least one para. 0082; “the vehicle 150 compares the geometric shape of the detected object (e.g., a hexagonal stop sign) to the geometric shape of the object on the map.”) in response to a request from a user (Wheeler, at least paras. 0137 and 0105; “The map data request module 1330 selects a vehicle for requesting additional map data for specific location and send a request to the vehicle. The map data request module 1330 sends a request via the vehicle interface module 160 and also receives additional map data via the vehicle interface module 160.” and “The human operator can provide to the HD map system 110 with instructions on whether the landmark object as represented in the HD map 110 should be updated or is accurate.”).
Regarding claim 6, Wheeler teaches (previously presented) A structure investigation assistance method (Wheeler, at least one para. 0002; “This disclosure relates generally to maps for autonomous vehicles, and more particularly to updating high definition maps based on sensor data collected by to autonomous vehicles.”) comprising:
selecting, from a first storage unit image data targeted for an investigation request (Wheeler, at least one para. 0137; “The map data request module 1330 selects a vehicle for requesting additional map data for specific location and send a request to the vehicle. The map data request module 1330 sends a request via the vehicle interface module 160 and also receives additional map data via the vehicle interface module 160.”), the first storage unit receiving a request for an investigation of a structure and storing image data of a road continuously captured by a camera (Wheeler, at least one para. 0079; “The vehicle 150 receives 902 sensor data from the vehicle sensors 105 concurrently with the vehicle 150 traversing along a route. As described previously, the sensor data (e.g., the sensor data 230) includes, among others, image data, location data, vehicle motion data, and LIDAR scanner data.”);
selecting, from a second storage unit detailed information about a structure targeted for the investigation request, the second storage unit storing detailed information in association with position information about the structure on a map (Wheeler, at least one para. 0081; “The vehicle 150 obtains 906 a set of represented objects (e.g., landmarks represented on the LMap) based on the current location of the vehicle. For example, the vehicle 150 queries its current location in the HD map data stored in the local HD map store 275 on the vehicle to find the set of represented objects located within a predetermined region surrounding the vehicle's current location. The HD map data stored in the on-vehicle or local HD map store 275 corresponds to a geographic region and includes landmark map data with representations of landmark objects in the geographic region.”), the detailed information being configured point cloud data obtained by 3D scanning the structure (Wheeler, at least one para. 0080; “The vehicle 150 processes 904 the sensor data to determine a current location of the vehicle 150, and detects a set of objects (e.g., landmarks) from the sensor data. For example, the current location may be determined from the GPS location data. The set of objects may be detected from the image data and the LIDAR scanner data.”); and
presenting a map indicating a location of the structure (Wheeler, at least one para. 0081; “The HD map data stored in the on-vehicle or local HD map store 275 corresponds to a geographic region and includes landmark map data with representations of landmark objects in the geographic region. The representations of landmark objects include locations such as latitude and longitude coordinates of the represented landmark objects.”), the selected image data (Wheeler, at least one para. 0086; “the vehicle 150 processes the image data of the of the traffic sign to detect what is displayed on it, and processes the wireless signals from the sign to obtain wirelessly-communicated information from the sign, and compares these, and responds based on whether there is a match.”), and the detailed information about the structure (Wheeler, at least one para. 0084; “The match record includes the current location of the vehicle 150 and a current timestamp. The match record may also include information about the verified represented object, such as an object ID identifying the verified represented object that is used in the existing landmark map stored in the HD map system HD map store 165. The object ID may be obtained from the local HD map store 275. The match record may further include other information about the vehicle (e.g., a particular make and model, vehicle ID, a current direction (e.g. relative to north), a current speed, a current motion, etc.) A match record may also include the version (e.g., a version ID) of the HD map that is stored in the local HD map store 275.”).
Regarding claim 7, Wheeler teaches (currently amended) A non-transitory computer-readable recording medium storing a program for causing a computer to execute (Wheeler, at least one para. 0165; “FIG. 16 is a block diagram illustrating components of an example machine able to read instructions from a machine-readable medium and execute them in a processor (or controller). Specifically, FIG. 16 shows a diagrammatic representation of a machine in the example form of a computer system 1600 within which instructions 1624 (e.g., software) for causing the machine to perform any one or more of the methodologies discussed herein may be executed. ”):
selecting, from a first storage image data targeted for an investigation request (Wheeler, at least one para. 0137; “The map data request module 1330 selects a vehicle for requesting additional map data for specific location and send a request to the vehicle. The map data request module 1330 sends a request via the vehicle interface module 160 and also receives additional map data via the vehicle interface module 160.”), the first storage receiving a request for an investigation of a structure and storing image data of a road continuously captured by a camera (Wheeler, at least one para. 0079; “The vehicle 150 receives 902 sensor data from the vehicle sensors 105 concurrently with the vehicle 150 traversing along a route. As described previously, the sensor data (e.g., the sensor data 230) includes, among others, image data, location data, vehicle motion data, and LIDAR scanner data.”);
selecting, from a second storage detailed information about a structure targeted for the investigation request, the second storage storing detailed information in association with position information about the structure on a map (Wheeler, at least one para. 0081; “The vehicle 150 obtains 906 a set of represented objects (e.g., landmarks represented on the LMap) based on the current location of the vehicle. For example, the vehicle 150 queries its current location in the HD map data stored in the local HD map store 275 on the vehicle to find the set of represented objects located within a predetermined region surrounding the vehicle's current location. The HD map data stored in the on-vehicle or local HD map store 275 corresponds to a geographic region and includes landmark map data with representations of landmark objects in the geographic region. ”), the detailed information being configured point cloud data obtained by 3D scanning the structure (Wheeler, at least one para. 0080; “The vehicle 150 processes 904 the sensor data to determine a current location of the vehicle 150, and detects a set of objects (e.g., landmarks) from the sensor data. For example, the current location may be determined from the GPS location data. The set of objects may be detected from the image data and the LIDAR scanner data.”); and
presenting a map indicating a location of the structure (Wheeler, at least one para. 0081; “The HD map data stored in the on-vehicle or local HD map store 275 corresponds to a geographic region and includes landmark map data with representations of landmark objects in the geographic region. The representations of landmark objects include locations such as latitude and longitude coordinates of the represented landmark objects.”), the selected image data (Wheeler, at least one para. 0086; “the vehicle 150 processes the image data of the of the traffic sign to detect what is displayed on it, and processes the wireless signals from the sign to obtain wirelessly-communicated information from the sign, and compares these, and responds based on whether there is a match.”), the detailed information about the structure (Wheeler, at least one para. 0084; “The match record includes the current location of the vehicle 150 and a current timestamp. The match record may also include information about the verified represented object, such as an object ID identifying the verified represented object that is used in the existing landmark map stored in the HD map system HD map store 165. The object ID may be obtained from the local HD map store 275. The match record may further include other information about the vehicle (e.g., a particular make and model, vehicle ID, a current direction (e.g. relative to north), a current speed, a current motion, etc.) A match record may also include the version (e.g., a version ID) of the HD map that is stored in the local HD map store 275.”), and an amount of protrusion of an object in the point cloud toward a road side (Wheeler, at least one para. 0108; “In various embodiments, the online HD map system 110 verifies the existing occupancy maps and updates the existing occupancy maps. If an object (e.g., a tree, a wall, a barrier, a road surface) moves, appears, or disappears, then the occupancy map is updated to reflect the changes. 
For example, if a hole appears in a road, a hole has been filled, a tree is cut down, a tree grows beyond a reasonable tolerance, etc, then the occupancy map is updated. If an object's appearance changes, then the occupancy map is updated to reflect the changes.”, it is inherent that a reasonable tolerance for a tree is one that allows vehicular traffic to move freely. As a result, a tree growing beyond a reasonable tolerance can reasonably be interpreted as tree branches growing into the road side, thus disrupting the free flow of vehicular or pedestrian traffic.) and (Wheeler, at least one para. 0073; “FIG. 7 illustrates lane representations in an HD map, according to an embodiment. FIG. 7 shows a vehicle 710 at a traffic intersection. The HD map system provides the vehicle with access to the map data that is relevant for autonomous driving of the vehicle. This includes, for example, features 720a and 720b that are associated with the lane but may not be the closest features to the vehicle.”, showing the lateral positioning of traffic lights within the road side) and (Wheeler, at least one para. 0155; “if lane elements were traversed (i.e., driven over existing region in the map) the message includes a list of traversed lane element IDs, information describing a scope of change if any (what type of change and how big), a change fingerprint (to help identify duplicate changes), and a size of the change packet.”, teaching the scope of the change and the size/amount of the change).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Wheeler (US 20180188045 A1) as applied to claim 1 above, and further in view of KIHARA YUKA (JP 2017185578 A).
Regarding claim 3, Wheeler teaches (currently amended) The structure investigation assistance system according to claim 1 (Wheeler, at least paras. 0079-0080; “The vehicle 150 receives 902 sensor data from the vehicle sensors 105 concurrently with the vehicle 150 traversing along a route. As described previously, the sensor data (e.g., the sensor data 230) includes, among others, image data, location data, vehicle motion data, and LIDAR scanner data.” and “The vehicle 150 processes 904 the sensor data to determine a current location of the vehicle 150, and detects a set of objects (e.g., landmarks) from the sensor data. For example, the current location may be determined from the GPS location data. The set of objects may be detected from the image data and the LIDAR scanner data. In various embodiments, the vehicle 150 detects the objects in a predetermined region surrounding the vehicle's current location. For each determined object, the vehicle 150 may also determine information associated with the object such as a distance of the object away from the current location, a location of the object, a geometric shape of the object, and the like.”); and present detailed information about the estimated structure (Wheeler, at least one para. 0081; “The HD map data stored in the on-vehicle or local HD map store 275 corresponds to a geographic region and includes landmark map data with representations of landmark objects in the geographic region.”).
Wheeler does not explicitly teach wherein the at least one processor further performs operation to: estimate an overall image of a structure hidden by overlapping of subjects in the image data by using the point cloud data.
KIHARA YUKA, in the same field of endeavor (KIHARA YUKA, translated copy page 2 and para. 3; “The grip control device 200 according to the present embodiment includes a grip control processing unit 210 and controls the arm member 300 and the hand member 310 based on image data captured by the image capturing device 400.”) teaches wherein the at least one processor further performs operation to: estimate an overall image of a structure hidden by overlapping of subjects in the image data (KIHARA YUKA, translated copy page 11 and para. 10; “Subsequently, the grip control processing unit 210C restores the shape of the object 123 from the object region 123A, and calculates the reliability of the restored shape. At this time, the image of the object 123 is overlapped with the image of the object 122 and the image of the object 121, and a part of the image is missing. Therefore, the shape of the object 123 that is partially missing is restored from the object region 123A, and the reliability is less than the reliability threshold”) by using the point cloud data (KIHARA YUKA, translated copy page 4 and para. 4; “The object area detection unit 231 detects an area where an object exists in the imaging area based on image data and distance information (point cloud data) included in the imaging data.”).
Wheeler and KIHARA YUKA are both considered analogous art to the claimed invention because both are in the same field of processing image data to identify a structure. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the structure identification process of Wheeler with the teaching of KIHARA YUKA. One of ordinary skill in the art would have been motivated to make this modification so that the exact shape of the targeted structure can be identified by eliminating obstacles that are positioned in front of the targeted structure (KIHARA YUKA; page 12).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to UPUL P CHANDRASIRI, whose telephone number is (703) 756-5823. The examiner can normally be reached M-F, 8:30 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace, can be reached at 571-272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/U.P.C./
Examiner, Art Unit 3665

/CHRISTIAN CHACE/
Supervisory Patent Examiner, Art Unit 3665