Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
Figure 2 is objected to as depicting a block diagram without “readily identifiable” descriptors of each block, as required by 37 CFR 1.84(n). Rule 84(n) requires “labeled representations” of graphical symbols, such as blocks; and any that are “not universally recognized may be used, subject to approval by the Office, if they are not likely to be confused with existing conventional symbols, and if they are readily identifiable.” In the case of figure 2, the blocks are not readily identifiable per se and therefore require the insertion of text that identifies the function of that block. That is, each vacant block should be provided with a corresponding label identifying its function or purpose.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 24 and 25 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chattopadhyay et al. (US20190285752, hereinafter “Chattopadhyay”).
Claim 24 (new). Chattopadhyay teaches A method for providing, by a data processing system, a trained algorithm for detecting one or several objects in a point cloud that represents a scene and assigning to each detected object an object type chosen from a set of one or several predefined types ([0070] “perform object recognition and/or tracking of detected objects,…Perception engine 1238 may perform object recognition from sensor data inputs using deep learning, such as through one or more convolutional neural networks and other machine learning models 1256.”) and a list ([0036] “generates a unified list of objects present in the 3D spatial data (e.g., LiDAR point cloud data),”) of bounding boxes (BBOX), ([0035] “Each scan module 104 may output information associated with detected objects … the contours may be represented by boundary information, such as coordinates of 3D boundary boxes.”) the method comprising: receiving input training data, the input training data including a plurality of point clouds, ([0064] “may collect sensor data from a variety of devices in one or more locations and utilize this data to build and/or train machine-learning models which may be used at the cloud-based system”) each representing a scene ([0020] “LiDAR sensors are deployed for acquiring high density 3D spatial data of a scene”) with one or several objects; ([0019] “when scanned by a 3D capture device, a field of view may be represented as a point cloud having points positioned at the respective surfaces of the various objects.”) receiving output training data, the output training data identifying, for each of the point clouds of the input training data, at least one object of the scene, ([0031] “An object detection algorithm (e.g., process) may identify clusters in the 3D spatial data (e.g., point cloud data) corresponding to different objects in the field of view of the 3D capture device.”) and for each identified object, associating with the at least one object a type of object chosen from the set of one or several predefined types ([0067] “utilize the trained machine learning models 1256 to derive various … classifications” and [0070] “to detect objects, such as other vehicles, pedestrians, wildlife, cyclists, etc. moving within an environment,”) and a list ([0036] “generates a unified list of objects present in the 3D spatial data (e.g., LiDAR point cloud data),”) of BBOXes, ([0035] “Each scan module 104 may output information associated with detected objects … the contours may be represented by boundary information, such as coordinates of 3D boundary boxes.”) wherein each BBOX of the BBOX list defines a spatial location within the point cloud ([0035] “coordinates of 3D boundary boxes.”) including a set of points representing the object or a part of the object; ([0036] “generates a unified list of objects present in the 3D spatial data (e.g., LiDAR point cloud data),… unified list of objects may include information indicating the location and contours of each detected object.”) training an algorithm based on the input training data ([0068] “a data collection module 1234 may be provided with logic to determine sources from which data is to be collected (e.g., for inputs in the training or use of various machine learning models 1256 used by the vehicle).”) and the output training data to form a trained algorithm; ([0081] “In supervised learning, the model may be built using a training set of data that contains both the inputs and corresponding desired outputs.”) and providing the resulting trained algorithm. ([0067] “utilize the trained machine learning models 1256 to derive various inferences, predictions, classifications, and other results.”)
Claim 25 (new). Chattopadhyay teaches The method according to claim 24, wherein the input training data includes a plurality of point clouds that each represent a different scene. ([0064] “may collect sensor data from a variety of devices in one or more locations and utilize this data to build and/or train machine-learning models which may be used at the cloud-based system”)
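For purposes of illustration only, and not as a characterization of Chattopadhyay's implementation or of the instant specification, the arrangement of input and output training data recited in claim 24 (point clouds paired, per identified object, with a predefined type and a list of bounding boxes) may be sketched as follows in Python. All names, structures, and the placeholder training step are hypothetical.

# Hypothetical sketch of the claimed training data; not drawn from the cited art
# or the instant specification.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]          # (x, y, z)
BBox = Tuple[Point, Point]                  # (min corner, max corner)

@dataclass
class LabeledObject:
    object_type: str                        # chosen from a set of predefined types
    bboxes: List[BBox]                      # spatial locations of the object or its parts

@dataclass
class TrainingSample:
    point_cloud: List[Point]                # input training data: one scene
    objects: List[LabeledObject]            # output training data for that scene

def train(samples: List[TrainingSample]) -> dict:
    """Placeholder supervised training step: fit some model to the paired inputs
    (point clouds) and outputs (types and BBOX lists) and return the result."""
    model = {"classes": sorted({o.object_type for s in samples for o in s.objects})}
    return model                            # the "trained algorithm" that is then provided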
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 16, 21-23, 26 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Chattopadhyay et al. (US20190285752, hereinafter “Chattopadhyay”) in view of Xu et al. (US9633483, hereinafter “Xu”).
Claim 16 (new). Chattopadhyay teaches A method for processing a point cloud, the method comprising: acquiring or receiving a point cloud representing a scene ([0020] “LiDAR sensors are deployed for acquiring high density 3D spatial data of a scene”) with one or several objects; ([0019] “when scanned by a 3D capture device, a field of view may be represented as a point cloud having points positioned at the respective surfaces of the various objects.”)
using an object detection algorithm ([0023] “object detection algorithm”) for detecting the one or several objects in the point cloud, ([0020] “LiDAR sensors are deployed for acquiring high density 3D spatial data of a scene for object detection”) the ODA being configured for outputting, for each object detected in the point cloud, an object type ([0031] “An object detection algorithm (e.g., process) may identify clusters in the 3D spatial data (e.g., point cloud data) corresponding to different objects in the field of view of the 3D capture device.” And [0070] “to detect objects, such as other vehicles, pedestrians, wildlife, cyclists, etc. moving within an environment,” ) and a list ([0036] “generates a unified list of objects present in the 3D spatial data (e.g., LiDAR point cloud data),”) of bounding boxes (BBOX), ([0035] “Each scan module 104 may output information associated with detected objects … the contours may be represented by boundary information, such as coordinates of 3D boundary boxes.”) wherein the object type is chosen from a set of one or several predefined object types ([0070] “to detect objects, such as other vehicles, pedestrians, wildlife, cyclists,” and [0061]) that the ODA has been trained to identify, ([0064] “may collect sensor data from a variety of devices in one or more locations and utilize this data to build and/or train machine-learning models which may be used at the cloud-based system”) wherein each BBOX of the BBOX list defines a spatial location within the point cloud including a set of points ([0035] “coordinates of 3D boundary boxes.”) representing the detected object or a part of the detected object; ([0036] “generates a unified list of objects present in the 3D spatial data (e.g., LiDAR point cloud data),… unified list of objects may include information indicating the location and contours of each detected object.”)
for each of the predefined object types that was outputted, automatically creating a first list of all BBOXes that have been outputted together ([0059] “executes an object detection algorithm on the first data segment to generate a first list of objects”) with the predefined type; ([0067] “utilize the trained machine learning models 1256 to derive various … classifications” is understood to be the same as the claimed predefined type in light of instant specifications [0034])
for each BBOX outputted by the ODA, automatically creating a second list of all predefined object types that have been outputted for a detected object whose associated BBOX list comprises the BBOX; ([0058] “During the third stage of processing object lists 6 and 8 are combined to produce object list 9 (e.g., unified objects list 108).” Figure 9 shows all object lists are combined to form object list 9 which contains all predefined object types)
Chattopadhyay does not explicitly teach and using at least one of the first or second lists for automatically filtering the point cloud.
Xu teaches and using at least one of the first or second lists (col7 line63 “objects are classified 314 into one of the pre-defined classes (e.g., pedestrian or car)”) for automatically filtering the point cloud. (col7line66 “using an input image 320 to generate blobs which are annotated 322 and from which features are extracted 324…Thus, the system efficiently reduces and segments a 3D point cloud for object cueing”)
It would have been obvious to persons of ordinary skill in the art before the effective filing date of the claimed invention to modify Chattopadhyay to use the lists to filter the point cloud, as taught by Xu, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been that it “efficiently reduces and segments a 3D point cloud for object cueing in a pipeline and uses outputs for effective 3D scene analysis” (Xu, col. 8, line 1).
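For clarity of the record only, the first list (all BBOXes output together with a given predefined type) and the second list (all predefined types output for detected objects whose BBOX list comprises a given BBOX) recited in claim 16 may be understood as simple inverted indexes, and the claimed filtering as retaining points inside the listed BBOXes. The Python sketch below is a hypothetical illustration under that reading; it is not asserted to be the method of Chattopadhyay, Xu, or the instant application.

# Hypothetical sketch of the claimed first list, second list, and filtering step.
from collections import defaultdict
from typing import Dict, List, Tuple

Point = Tuple[float, float, float]
BBox = Tuple[Point, Point]                        # (min corner, max corner)
Detection = Tuple[str, List[BBox]]                # (predefined object type, BBOX list)

def build_lists(detections: List[Detection]):
    first: Dict[str, List[BBox]] = defaultdict(list)    # type -> all BBOXes output with it
    second: Dict[BBox, List[str]] = defaultdict(list)   # BBOX -> all types whose BBOX list contains it
    for obj_type, bboxes in detections:
        for bbox in bboxes:
            first[obj_type].append(bbox)
            second[bbox].append(obj_type)
    return first, second

def inside(p: Point, bbox: BBox) -> bool:
    (x0, y0, z0), (x1, y1, z1) = bbox
    return x0 <= p[0] <= x1 and y0 <= p[1] <= y1 and z0 <= p[2] <= z1

def filter_points(cloud: List[Point], first: Dict[str, List[BBox]], selected_type: str) -> List[Point]:
    """Keep only points falling inside a BBOX of the first list for the selected type."""
    boxes = first.get(selected_type, [])
    return [p for p in cloud if any(inside(p, b) for b in boxes)]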
Claim 21 (new). Chattopadhyay and Xu teach The method according to claim 16,
Chattopadhyay teaches wherein the ODA is a trained algorithm ([0070] “perform object recognition and/or tracking of detected objects,…Perception engine 1238 may perform object recognition from sensor data inputs using deep learning, such as through one or more convolutional neural networks and other machine learning models 1256.”) configured for receiving, as input, a point cloud, ([0070] “take as inputs various sensor data (e.g., 1258) including data,” and [0066] “sensor data (e.g., camera image data, LIDAR point clouds, etc.),”) and for automatically detecting or identifying one or several sets of points within the received point cloud matching at least one of a spatial configuration or distribution of objects or parts of objects that the ODA has been trained to detect, ([0064] “may collect sensor data from a variety of devices in one or more locations and utilize this data to build and/or train machine-learning models which may be used at the cloud-based system”) wherein each of the objects belongs to one of the predefined object types, ([0067] “utilize the trained machine learning models 1256 to derive various … classifications” is understood to be the same as the claimed predefined type in light of instant specifications [0034]) for mapping each of the sets of points to a BBOX, ([0035] “coordinates of 3D boundary boxes.”) and for outputting, for each detected object, the type of the object and the BBOX list . ([0036] “generates a unified list of objects present in the 3D spatial data (e.g., LiDAR point cloud data),… unified list of objects may include information indicating the location and contours of each detected object.”)
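As an illustration only, “mapping each of the sets of points to a BBOX” as recited in claim 21 may be read as computing an axis-aligned bounding box from a detected point set. The following Python sketch is hypothetical and does not represent the specific implementation of Chattopadhyay.

# Hypothetical mapping of a detected point set to a 3D BBOX (min/max corners).
from typing import List, Tuple

Point = Tuple[float, float, float]

def points_to_bbox(points: List[Point]) -> Tuple[Point, Point]:
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Example: two points map to the box spanning them.
assert points_to_bbox([(0, 0, 0), (1, 2, 3)]) == ((0, 0, 0), (1, 2, 3))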
Claim 22 (new). Chattopadhyay and Xu teach The method according to claim 21,
Chattopadhyay teaches wherein the ODA is configured or trained for combining several of the identified sets of points for determining the type of object, ([0036] “generates a unified list of objects present in the 3D spatial data (e.g., LiDAR point cloud data),… unified list of objects may include information indicating the location and contours of each detected object.”) the BBOX list being configured for listing the BBOXes whose associated set of points is part of the combination. ([0059] “executes an object detection algorithm on the first data segment to generate a first list of objects”)
Claim 23 (new). Chattopadhyay and Xu teach The method according to claim 16, further comprising,
Chattopadhyay teaches in addition to acquiring or receiving the point cloud, acquiring or receiving one or several images of the scene, ([0020] “LiDAR sensors are deployed for acquiring high density 3D spatial data of a scene”) and using the one or several images together with the point cloud as input to the ODA for detecting the one or several objects. ([0019] “when scanned by a 3D capture device, a field of view may be represented as a point cloud having points positioned at the respective surfaces of the various objects.”)
Claim 26 (new). Chattopadhyay teaches A data processing system comprising: a processor; and
an accessible memory, ([0095] “ a processing element may include other elements on a chip with processor 1500. For example, a processing element may include memory control logic along with processor 1500.”) the data processing system being configured to:
acquire or receive a point cloud representing a scene; ([0020] “LiDAR sensors are deployed for acquiring high density 3D spatial data of a scene”)
use an object detection algorithm ([0023] “object detection algorithm”) for detecting, in the point cloud, one or several objects of the scene, ([0020] “LiDAR sensors are deployed for acquiring high density 3D spatial data of a scene for object detection”) the ODA being configured for outputting, for each detected object, an object type ([0031] “An object detection algorithm (e.g., process) may identify clusters in the 3D spatial data (e.g., point cloud data) corresponding to different objects in the field of view of the 3D capture device.” And [0070] “to detect objects, such as other vehicles, pedestrians, wildlife, cyclists, etc. moving within an environment,” ) selected from a set of one or several predefined object types ([0070] “to detect objects, such as other vehicles, pedestrians, wildlife, cyclists,” and [0061]) and a list ([0036] “generates a unified list of objects present in the 3D spatial data (e.g., LiDAR point cloud data),”) of bounding boxes (BBOX), ([0035] “Each scan module 104 may output information associated with detected objects … the contours may be represented by boundary information, such as coordinates of 3D boundary boxes.”) wherein each BBOX of the list is configured for defining a spatial location within the point cloud comprising a set of points ([0035] “coordinates of 3D boundary boxes.”) representing the detected object or a part of the object; ([0036] “generates a unified list of objects present in the 3D spatial data (e.g., LiDAR point cloud data),… unified list of objects may include information indicating the location and contours of each detected object.”)
for each of the predefined object types that was outputted,
automatically create a first list of all BBOXes that have been outputted together ([0059] “executes an object detection algorithm on the first data segment to generate a first list of objects”) with the predefined type; ([0067] “utilize the trained machine learning models 1256 to derive various … classifications” is understood to be the same as the claimed predefined type in light of instant specifications [0034]) for each BBOX outputted by the ODA, automatically create a second list of all predefined object types that have been outputted for a detected object whose associated BBOX list comprises the respective BBOX ; ([0058] “During the third stage of processing object lists 6 and 8 are combined to produce object list 9 (e.g., unified objects list 108).” Figure 9 shows all object lists are combined to form object list 9 which contains all predefined object types)
Chattopadhyay does not explicitly teach use at least one of the first or second lists for automatically filtering the point cloud.
Xu teaches use at least one of the first or second lists (col7 line63 “objects are classified 314 into one of the pre-defined classes (e.g., pedestrian or car)”) for automatically filtering the point cloud. (col7line66 “using an input image 320 to generate blobs which are annotated 322 and from which features are extracted 324…Thus, the system efficiently reduces and segments a 3D point cloud for object cueing”)
It would have been obvious to persons of ordinary skill in the art before the effective filing date of the claimed invention to modify Chattopadhyay to use the lists to filter the point cloud, as taught by Xu, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been that it “efficiently reduces and segments a 3D point cloud for object cueing in a pipeline and uses outputs for effective 3D scene analysis” (Xu, col. 8, line 1).
Claim 29 (new). Claim 29 is directed to a non-transitory computer-readable medium that is executed and performed by the system of claim 26, and is likewise rejected for the reasons set forth above with respect to claim 26.
Claims 17, 27 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Chattopadhyay et al. (US20190285752, hereinafter “Chattopadhyay”) in view of Xu et al. (US9633483, hereinafter “Xu”), and further in view of Neumann et al. (US20140098094, hereinafter “Neumann”).
Claim 17 (new). Chattopadhyay and Xu teach The method according to claim 16, which comprises,
Chattopadhyay and Xu do not explicitly teach upon a selection of a position within a displayed image created from the point cloud, automatically determining to which BBOX the position belongs, and automatically displaying the second list of predefined object types listed for the respective BBOX.
Neumann teaches upon a selection of a position within a displayed image created from the point cloud, ([0064] “The Point Part Editor and Importer (240) provides the interactive tools needed for selecting regions of points within a Point Cloud (100) or Point Cloud Clusters (140).”) automatically determining to which BBOX the position belongs, ([0025] “Each cluster of points has an associated bounding box.”) and automatically displaying the second list of predefined object types listed for the respective BBOX. ([0018] “all the part attributes in the part library are included with the output model.”)
It would have been obvious to persons of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Chattopadhyay and Xu to select a point in the point cloud, determine which bounding box it belongs to, and display a list of predefined object types listed for that bounding box, as taught by Neumann, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been that it “allows users to interactively isolate regions of a point cloud and store them in the matching database for object matching” (Neumann, [0019]).
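For illustration only, the limitation of claim 17 (determining, upon selection of a position, which BBOX the position belongs to and displaying the second list of predefined object types for that BBOX) may be sketched as a point-in-box lookup against the second list built above. The Python sketch is hypothetical and is not drawn from Neumann.

# Hypothetical sketch of claim 17's selection-to-BBOX lookup.
from typing import Dict, List, Optional, Tuple

Point = Tuple[float, float, float]
BBox = Tuple[Point, Point]

def containing_bbox(position: Point, bboxes: List[BBox]) -> Optional[BBox]:
    for (x0, y0, z0), (x1, y1, z1) in bboxes:
        if x0 <= position[0] <= x1 and y0 <= position[1] <= y1 and z0 <= position[2] <= z1:
            return ((x0, y0, z0), (x1, y1, z1))
    return None

def types_for_selection(position: Point, second: Dict[BBox, List[str]]) -> List[str]:
    """Return the second list of predefined object types for the BBOX containing the position."""
    bbox = containing_bbox(position, list(second.keys()))
    return second.get(bbox, []) if bbox else []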
Claim 27 (new). Chattopadhyay and Xu teach The data processing system according to claim 26, wherein,
Chattopadhyay and Xu do not explicitly teach upon a selection of a position within a displayed image created from the point cloud, the processor is configured to automatically determine to which BBOX the position belongs to, and to automatically display the second list of predefined object types listed for the respective BBOX .
Neumann teaches upon a selection of a position within a displayed image created from the point cloud, ([0064] “The Point Part Editor and Importer (240) provides the interactive tools needed for selecting regions of points within a Point Cloud (100) or Point Cloud Clusters (140).”) the processor is configured to automatically determine to which BBOX the position belongs to, ([0025] “Each cluster of points has an associated bounding box.”) and to automatically display the second list of predefined object types listed for the respective BBOX . ([0018] “all the part attributes in the part library are included with the output model.”)
It would have been obvious to persons of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Chattopadhyay and Xu to select a point in the point cloud, determine which bounding box it belongs to, and display a list of predefined object types listed for that bounding box, as taught by Neumann, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been that it “allows users to interactively isolate regions of a point cloud and store them in the matching database for object matching” (Neumann, [0019]).
Claim 30 (new). Claim 30 is directed to a non-transitory computer-readable medium that is executed and performed by the system of claim 27, and is likewise rejected for the reasons set forth above with respect to claim 27.
Claims 18-20, 28 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Chattopadhyay et al. (US20190285752, hereinafter “Chattopadhyay”) in view of Xu et al. (US9633483, hereinafter “Xu”), and further in view of Bauer et al. (US20230011818, hereinafter “Bauer”).
Claim 18 (new). Chattopadhyay and Xu teach The method according to claim 16, wherein,
Chattopadhyay and Xu do not explicitly teach upon selection of one of the predefined object types of the second list of predefined object types, the filtering comprises automatically displaying or hiding only those points of the set or sets of points associated with the BBOXes of the first list created for the predefined object type that has been selected.
Bauer teaches upon selection of one of the predefined object types of the second list of predefined object types, ([0025] “selecting a CAD object in the catalog that corresponds to the item;”) the filtering comprises automatically displaying or hiding only those points of the set or sets of points ([0025] “aligning, by the processing device, the CAD object to the item in the point cloud; and outputting a position and orientation of the aligned CAD object,”) associated with the BBOXes of the first list created for the predefined object type that has been selected. ([0025] “fitting the CAD model into the 3D box”)
It would have been obvious to persons of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Chattopadhyay and Xu to select an object type from a catalog and match only those points corresponding to the object associated with the bounding boxes for the selected predefined object type, as taught by Bauer, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been that it provides an “improvement to CAD object detection because conventional automated techniques cannot operate effectively, accurately” (Bauer, [0070]).
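For illustration only, the display-or-hide filtering of claim 18 (showing or hiding only the points in the BBOXes of the first list created for the selected predefined object type) may be sketched as a per-point visibility mask. The Python sketch below is hypothetical and is not drawn from Bauer.

# Hypothetical sketch of claim 18's display/hide filtering by selected type.
from typing import Dict, List, Tuple

Point = Tuple[float, float, float]
BBox = Tuple[Point, Point]

def visibility_mask(cloud: List[Point], first: Dict[str, List[BBox]],
                    selected_type: str, hide: bool = False) -> List[bool]:
    def inside(p: Point, b: BBox) -> bool:
        (x0, y0, z0), (x1, y1, z1) = b
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1 and z0 <= p[2] <= z1
    in_selected = [any(inside(p, b) for b in first.get(selected_type, [])) for p in cloud]
    # hide=False: display only the selected type's points; hide=True: hide them instead.
    return [not v if hide else v for v in in_selected]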
Claim 19 (new). Chattopadhyay and Xu teach The method according to claim 16,
Chattopadhyay and Xu do not explicitly teach further comprising providing the filtered point cloud via an interface.
Bauer teaches further comprising providing the filtered point cloud via an interface. ([0060] “a user interface of the conversion software can include output similar to block 114 with CAD objects overlaid on a point cloud and pages (e.g., database entries) from a CAD object catalog for the user to select from. The points in the point cloud that the CAD object is overlaying can be saved or they can be removed.”)
It would have been obvious to persons of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Chattopadhyay and Xu to provide the filtered point cloud via an interface, as taught by Bauer, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been that it provides an “improvement to CAD object detection because conventional automated techniques cannot operate effectively, accurately” (Bauer, [0070]).
Claim 20 (new). Chattopadhyay, Xu and Bauer teach The method according to claim 19, further comprising
Chattopadhyay and Xu do not explicitly teach using the filtered point cloud for visualization on a screen.
Bauer teaches using the filtered point cloud for visualization on a screen. ([0116] “The user interface may be one or more LEDs (light-emitting diodes) 82, an LCD (liquid-crystal diode) display, a CRT (cathode ray tube) display, a touch-screen display or the like.”)
It would have been obvious to persons of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Chattopadhyay and Xu to display the filtered point cloud on a screen, as taught by Bauer, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been that it provides an “improvement to CAD object detection because conventional automated techniques cannot operate effectively, accurately” (Bauer, [0070]).
Claim 28 (new). Chattopadhyay and Xu teach The data processing system according to claim 26, wherein,
Chattopadhyay and Xu do not explicitly teach upon a selection of one of the predefined object types of the second list of predefined object types, displaying or hiding only the points of the sets of points associated with the BBOXes of the first list created for the predefined object type that has been selected.
Bauer teaches upon a selection of one of the predefined object types of the second list of predefined object types, ([0025] “selecting a CAD object in the catalog that corresponds to the item;”) displaying or hiding only the points of the sets of points ([0025] “aligning, by the processing device, the CAD object to the item in the point cloud; and outputting a position and orientation of the aligned CAD object,”) associated with the BBOXes of the first list created for the predefined object type that has been selected. ([0025] “fitting the CAD model into the 3D box”)
It would have been obvious to persons of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Chattopadhyay and Xu to select an object type from a catalog and match only those points corresponding to the object associated with the bounding boxes for the selected predefined object type, as taught by Bauer, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been that it provides an “improvement to CAD object detection because conventional automated techniques cannot operate effectively, accurately” (Bauer, [0070]).
Claim 31 (new). Claim 31 is directed to a non-transitory computer-readable medium that is executed and performed by the system of claim 28, and is likewise rejected for the reasons set forth above with respect to claim 28.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:
Lai et al US20220092291 teaches receiving 3D point cloud data and a bounding box that envelopes the object, and an interface to allow a user to properly label the bounding boxes for training a machine learning model.
Iancu et al US20210232871 teaches putting a bounding box around a cluster of points belonging to a single object and generating a list of all features.
Huang et al US20220188554 teaches estimating a 3D bounding box around each object of interest within a point cloud and annotating the box.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OWAIS MEMON whose telephone number is (571)272-2168. The examiner can normally be reached M-F (7:00am - 4:00pm) CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OWAIS I MEMON/Examiner, Art Unit 2663