DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Specification
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The abstract of the disclosure is objected to because it repeats the verbiage from the title nearly verbatim in the first sentence. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Response to Arguments
Applicant’s arguments with respect to the rejection of claim 199, see pp. 9-12 of the ‘REMARKS,’ have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Frueh (U.S. Patent 8,466,915; ‘FRUEH’). The Examiner notes that the previously-cited CHEN reference is not relied upon in this Office action.
The Examiner notes that the newly-amended limitation “generating a first outline of the first 3D model” is now taught by FRUEH. Please see the Office action below for further rationale for the rejection.
The Examiner notes that the newly-amended limitation “augmenting the first 3D model with the second 3D model … based on the first/second outlines” is now taught by the combination of FRUEH and the previously-cited COHEN reference. Please see the Office action below for further rationale for the rejection of the newly-amended claim limitation(s).
Applicant’s representative (see pp. 11-12 of ‘REMARKS’) states that “the term ‘outline’ refers to a simplified representation of a model’s boundary or shape, often derived from a specific viewpoint such as a TOP-DOWN or PLANAR PROJECTION (emphasis added by the Examiner).” The Examiner notes that FIGS. 3A-3F of FRUEH explicitly depict the airborne model as a TOP-DOWN VIEW, which may also be regarded as a planar projection, as viewed orthogonally with respect to the ground plane.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 199-200, 203-216, and 219 are rejected under 35 U.S.C. 103 as being unpatentable over Frueh (U.S. Patent 8,466,915; ‘FRUEH’) in view of Cohen et al. ("Indoor-Outdoor 3D Reconstruction Alignment", pub. 2016, 'COHEN').
Regarding claim 199, FRUEH discloses a method of augmenting 3D models, the method comprising:
receiving … first … images; generating a first 3D model based on the first … images (FRUEH; Col. 4, Lines 50-63; “… two separate ground and airborne models are stored or received. … the initial ground [‘first 3D model’] and airborne models are both 3D models of spatial data. These can be 3D models including images of any type of object including … objects on a landscape or terrain, such as buildings … having a facade with an edge. … the initial ground model has images at a relatively higher resolution than the initial airborne model. … a ground model (also called a ground-based model) may be a model having many images obtained from a point view at or near a ground or street [‘receiving … first … images’] such as from … camera(s) on a vehicle or person traveling along or near the ground.”);
generating a first outline of the first 3D model (FRUEH; Col. 5, Lines 47-66; “FIGS. 3A-3F … illustrate … aligning ground and airborne models. FIG. 3A shows a top-down view of facades of a section 310 of an airborne model. The top edges of the facades are sampled at different points along the edges. FIG. 3B further shows a misalignment determined to be between corresponding ground model facades (represented with a solid line) and the facades of the airborne model. For a first facade, a set of consensus points 316 along edge 312A of the airborne model are determined. The consensus points 316 are those points on edge 312A determined to be within a predetermined distance of the ground facade edge 314A [‘outline of the first 3D model’]. … the endpoints of edge 312A, and with them all other edges connected at those endpoints, are moved until the consensus score of all affected edges combined is maximal. As shown by the arrow in FIG. 3C this results in a shifting of the ground model relative to the airborne model such that the edges 312A, 314A are better aligned. … when carrying out the aligning, first vertical edges are extracted from both the airborne 3D model and the ground facade model.”);
identifying … sides of the first 3D model (FRUEH; Col. 5, Lines 30-38; “… facades in the airborne model and ground model may each have a surface with a top and a bottom relative to a ground location, and at least two vertical edges extending between the top and bottom that define endpoints across the facade. The surface may represent a side of a building, … the top being where the building side meets the roof of the building and the bottom being where the building side meets a street … Vertical edges of the building side … represent edges of the building.”);
receiving … second … images; generating a second 3D model based on the second … images (FRUEH; Col. 4, Lines 63-67 ~ Col. 5, Lines 1-10; “An airborne model [‘generating a second 3D model’] (also called an airborne based model) may be a model having many images obtained [‘receiving … second … images’] from a point view located well above ground level such as an aerial view from … camera(s) on a plane or balloon traveling above the height of buildings. Ground models and airborne models can each be 3D models and may include 3D mesh information. … 3D model data may also include images, textures, and non-textured data. Ground 3D models may often include relatively high-resolution images of facades and optional depth or distance data, such as obtained with laser scanning. Airborne 3D models may often include relatively low-resolution images of facades.”);
generating a second outline of the second 3D model (FRUEH; Col. 5, Lines 47-65; “FIGS. 3A-3F … illustrate … aligning ground and airborne models. FIG. 3A shows a top-down view of facades of a section 310 of an airborne model. The top edges of the facades are sampled at different points along the edges. FIG. 3B further shows a misalignment determined to be between corresponding ground model facades (represented with a solid line) and the facades of the airborne model. For a first facade, a set of consensus points 316 along edge 312A of the airborne model [‘outline of the second 3D model’] are determined. The consensus points 316 are those points on edge 312A determined to be within a predetermined distance of the ground facade edge 314A. … the endpoints of edge 312A, and with them all other edges connected at those endpoints, are moved until the consensus score of all affected edges combined is maximal. As shown by the arrow in FIG. 3C this results in a shifting of the ground model relative to the airborne model such that the edges 312A, 314A are better aligned. … when carrying out the aligning, first vertical edges are extracted from both the airborne 3D model and the ground facade model.”);
identifying … second … sides of the second 3D model (FRUEH; Col. 5, Lines 30-38; “… facades in the airborne model and ground model may each have a surface with a top and a bottom relative to a ground location, and at least two vertical edges extending between the top and bottom that define endpoints across the facade. The surface may represent a side of a building, … the top being where the building side meets the roof of the building and the bottom being where the building side meets a street or sidewalk. Vertical edges of the building side may represent edges of the building.”); and
augmenting the first 3D model with the second 3D model (FRUEH; FIG. 2; Col. 7, Lines 18-31; “Step 206 involves merging the modified ground and airborne models to obtain a fused 3D model. This merging meshes the modified ground and airborne models to a single 3D model (including re-triangulation, closing holes, and/or repacking of texture). … with triangular primitive meshes, ground-based points and triangles are appended to the airborne points and triangles. Texture is repacked … to reclaim unused texture space pertaining to removed triangles. Any remaining holes are either closed by moving surrounding vertices towards each other, or by filling in new triangles.”).
FRUEH does not explicitly disclose substantially aligning the first … sides with the second … sides in a common coordinate system based on the first outline and the second outline, which COHEN discloses (COHEN; p. 290, § 4: ‘Natural Frame Estimation’; “To simplify the subsequent steps of our procedure, we align each 3D model into a canonical coordinate system. We choose the coordinate system that is aligned to the façade directions of the building. … we determine the main axes of each model by estimating the vanishing points in each input image. The vanishing points then vote for the three coordinate directions. … we align the coordinate system of each 3D model with the xyz-axes, such that the vertical axis is aligned with z and walls are mostly aligned with the x or y direction under a Manhattan world assumption.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of augmenting 3D models of FRUEH to include substantially aligning the first … sides with the second … sides in a common coordinate system based on the first outline and the second outline, as taught by COHEN. The motivation for this modification could have been to implement an alignment algorithm that exploits scene semantics to establish correspondences between indoor and outdoor models, leveraging the fact that the windows of a building can be seen both from the inside and the outside (COHEN; p. 286). For clarity of record, an illustrative sketch of such outline-based alignment follows.
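The following minimal sketch is Examiner-drafted, hypothetical, and for illustration only; it is not code from FRUEH or COHEN. It illustrates the consensus-point scoring described with respect to FRUEH’s FIGS. 3A-3C: points sampled along one top-down outline edge count as consensus points when they lie within a predetermined distance of the other outline’s edge, and the shift maximizing that count is kept.

    # Hypothetical, Examiner-drafted illustration only; not from the cited references.
    import numpy as np

    def sample_edge(p0, p1, n=20):
        # Sample n points along the top-down outline edge p0 -> p1.
        t = np.linspace(0.0, 1.0, n)[:, None]
        return (1 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)

    def point_segment_dist(pts, q0, q1):
        # Distance from each row of pts to the segment q0 -> q1.
        q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
        d = q1 - q0
        t = np.clip((pts - q0) @ d / (d @ d), 0.0, 1.0)
        return np.linalg.norm(pts - (q0 + t[:, None] * d), axis=1)

    def consensus_score(airborne_edge, ground_edge, shift, tol=0.5):
        # Count sampled airborne-edge points within tol of the shifted ground edge.
        pts = sample_edge(*airborne_edge)
        return int(np.sum(point_segment_dist(pts, ground_edge[0] + shift,
                                             ground_edge[1] + shift) < tol))

    airborne = (np.array([0.0, 0.0]), np.array([10.0, 0.0]))
    ground = (np.array([0.3, 1.2]), np.array([10.3, 1.2]))  # misaligned facade edge
    shifts = [np.array([dx, dy]) for dx in np.arange(-2, 2.1, 0.1)
              for dy in np.arange(-2, 2.1, 0.1)]
    best = max(shifts, key=lambda s: consensus_score(airborne, ground, s))
    # Several shifts may tie at full consensus; exact alignment is (-0.3, -1.2).
    print(np.round(best, 2), consensus_score(airborne, ground, best))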
Regarding claim 200, FRUEH-COHEN disclose the method of claim 199, wherein each image of the first … images and the second … images comprises a building object (FRUEH; Col. 4, Lines 50-67; Col. 5, Lines 1, 30-46).
Regarding claim 203, FRUEH-COHEN disclose the method of claim 199, wherein augmenting the first 3D model with the second 3D model further comprises substantially aligning the first outline of the first 3D model with the second outline of the second 3D model (COHEN; FIG. 2; p. 288, § 3; “Given separate indoor and outdoor models, we propose to align the inside and outside of a building through semantic information. … as windows are visible both from inside and outside, we use window detections to generate correspondences between the two models, which are then used to compute the alignment between the models.”).
Regarding claim 204 and claim 207, FRUEH-COHEN disclose the method of claim 203, wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on … value(s) derived from … architectural element(s) (COHEN; FIG. 2; p. 288, § 3; “Given separate indoor and outdoor models, we propose to align the inside and outside of a building through semantic information. … as windows are visible both from inside and outside, we use window detections to generate correspondences between the two models, which are then used to compute the alignment between the models.”; p. 289, § 3: ‘Model Alignment’; “… to avoid the combinatorial growth in complexity, we exploit the width and height of the 3D window detections to estimate a similarity transformation from a single window correspondence. Using a single match allows us to … generate the set of all possible alignment configurations.”).
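The following minimal sketch is Examiner-drafted, hypothetical, and for illustration only; it is not COHEN’s code. It illustrates how a single matched window can fix a similarity transformation once rotation has been resolved by the canonical-frame step: the window height ratio supplies the scale, and the window centers supply the translation.

    # Hypothetical, Examiner-drafted illustration only; not from the cited references.
    import numpy as np

    def similarity_from_window(win_a, win_b):
        # win_* = (center_xyz, height); returns (scale, translation) mapping a -> b.
        ca, ha = np.asarray(win_a[0], float), float(win_a[1])
        cb, hb = np.asarray(win_b[0], float), float(win_b[1])
        s = hb / ha          # scale from the matched window heights
        t = cb - s * ca      # translation aligning the window centers
        return s, t

    indoor_win = ([2.0, 1.0, 3.0], 1.2)    # window center and height, indoor model
    outdoor_win = ([5.0, 4.0, 6.3], 1.8)   # the same window seen in the outdoor model
    s, t = similarity_from_window(indoor_win, outdoor_win)
    print(s, t)  # apply as: x_outdoor = s * x_indoor + t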
Regarding claim 205 and claim 206, FRUEH-COHEN disclose the method of claim 204, further comprising:
matching an architectural element of the first 3D model/images with a corresponding architectural element of the second 3D model/images (COHEN; FIG. 2; p. 288, § 3; “Given separate indoor/outdoor models, we propose to align the inside and outside of a building through semantic information. … as windows are visible both from inside and outside, we use window detections to generate correspondences between the two models, which are then used to compute the alignment between the models.” p. 289, § 3: ‘Window Detection’; “Leveraging the SfM points, we estimate 3D window positions for each image individually. We then detect overlapping 3D windows and compute consensus window positions.”); wherein
substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on the matched architectural element (COHEN; p. 289, § 3: ‘Model Alignment’; “Given 3D window detections for the indoor and outdoor models, we next register the disjoint models based on window correspondences. Computing the alignment boils down to finding a similarity transformation between the models, which can be computed from three-point correspondences in the general case and from two-point matches if the gravity direction is known.”).
Regarding claim 208 and claim 209, FRUEH-COHEN disclose the method of claim 207, further comprising:
matching an architectural element of the first 3D model/images with a corresponding architectural element of the second 3D model/images (COHEN; FIG. 2; p. 288, § 3; “Given separate indoor/outdoor models, we … align the inside/outside of a building through semantic information. … as windows are visible both from inside and outside, we use window detections to generate correspondences between the two models, which are … used to compute the alignment between the models.” p. 289, § 3: ‘Window Detection’; “Leveraging the SfM points, we estimate 3D window positions for each image … We then detect overlapping 3D windows and compute consensus window positions.”);
substantially aligning the first/second 3D models … based on the matched architectural element (COHEN; p. 289, § 3: ‘Model Alignment’; “Given 3D window detections for the indoor and outdoor models, we next register the disjoint models based on window correspondences. Computing the alignment boils down to finding a similarity transformation between the models, which can be computed from three-point correspondences in the general case and from two-point matches if the gravity direction is known.”); and
deriving a value based on the substantial alignment of the first 3D model with the second 3D model (COHEN; pp. 294-295 § 5.2; “The intersection ratio γjk [‘deriving a value’] between two aligned models mj [‘first 3D model’] and mk [‘second 3D model’] is then defined as the fraction of the sparse 3D points in model mj that lie within a free-space voxel of mk.”); wherein
substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on the derived value (COHEN; p. 295, § 5.2; “… no 3D point in a model should violate the free-space of another model. However, this is rarely the case in practice due to noise and outliers in the reconstruction. Thus, we allow a certain amount of intersection [‘substantially aligning’] by setting λ =0.05 [‘based on the derived value’], i.e., less than 5% of all points in a model are allowed to violate the free-space constraint. All configurations containing two models with an intersection ratio of λ or more are discarded … during correspondence search.”).
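The following minimal sketch is Examiner-drafted, hypothetical, and for illustration only; it is not COHEN’s code. It illustrates the derived-value test of COHEN § 5.2: the intersection ratio is the fraction of one model’s sparse points that fall inside free-space voxels of another model, and alignments with a ratio of λ = 0.05 or more are discarded.

    # Hypothetical, Examiner-drafted illustration only; not from the cited references.
    import numpy as np

    def intersection_ratio(points_j, free_space_k, voxel=1.0):
        # Fraction of model j's sparse points landing in voxels carved as
        # free space by model k (free_space_k: set of integer voxel indices).
        idx = np.floor(np.asarray(points_j, float) / voxel).astype(int)
        hits = sum(tuple(v) in free_space_k for v in idx)
        return hits / len(idx)

    free_k = {(0, 0, 0), (1, 0, 0)}   # toy free-space volume of model k
    pts_j = np.array([[0.5, 0.5, 0.5], [5.0, 5.0, 5.0], [9.0, 9.0, 9.0]])
    gamma = intersection_ratio(pts_j, free_k)
    print(gamma, "discard" if gamma >= 0.05 else "keep")  # 1/3 -> discard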
Regarding claim 210 and claim 212, FRUEH-COHEN disclose the method of claim 199, further comprising:
identifying … first/second … elements of the first/second 3D model/images; and
correlating the first … elements with the second … elements (COHEN; p. 289, § 3: ‘Window Detection’; “… we apply a per-pixel classifier to detect windows in all input images [‘identifying … first/second … elements of the first/second 3D model/images’]. For each image, we employ a façade parsing approach on the rectified images … to obtain the 2D rectangles that most likely correspond to the actual windows seen in the photos. … we use the known camera poses and the sparse 3D scene points to estimate the 3D planes containing the windows. Leveraging the SfM points, we estimate 3D window positions for each image individually. We then detect overlapping 3D windows and compute consensus window positions [‘correlating the first … elements with the second … elements’].”); wherein
augmenting the first 3D model with the second 3D model is based on the correlated … elements (COHEN; FIG. 2; p. 288, § 3; “Given SfM reconstructions of indoors and outdoors together with their input images, we leverage per-pixel semantic classification to detect windows in 3D. These windows are then used to compute a registration between both scenes that maximizes the number of aligned windows while avoiding that the models intersect each other.” pp. 291-292, § 5: ‘Model Alignment’; “The goal of the alignment procedure is to transform the initially disjoint indoor and outdoor models into a common reference frame. Since traditional feature correspondences are not available, we instead employ window-to-window matches to facilitate the alignment. We utilize the fact that a single window correspondence defines a similarity transformation that registers one indoor against one outdoor model. This allows us to exhaustively evaluate all potential matches rather than having to rely on appearance to establish correspondences.”).
Regarding claim 211 and claim 213, FRUEH-COHEN disclose the method of claim 210 and the method of claim 212, further comprising:
identifying … third … elements [which] comprise elements common to the first … elements and the second … elements; wherein augmenting the first 3D model with the second 3D model is based on the third … elements (COHEN; p. 289, § 3: ‘Model Alignment’; “Given 3D window detections for the indoor and outdoor models, we next register the disjoint models [‘augmenting the first 3D model with the second 3D model’] based on window correspondences. Computing the alignment boils down to finding a similarity transformation between the models, which can be computed from three-point correspondences in the general case and from two-point matches if the gravity direction is known … we exploit the width and height of the 3D window detections to estimate a similarity transformation from a single window correspondence.” p. 290, § 4: ‘Window Detection’; “For each 2D window, we obtain a corresponding 3D window by projecting it onto a 3D plane estimated using the sparse SfM points.” pp. 291-292, § 5: ‘Model Alignment’; “The goal of the alignment procedure is to transform the initially disjoint indoor and outdoor models into a common reference frame. Since traditional feature correspondences are not available, we instead employ window-to-window matches [‘identifying … third … elements [which] comprise elements common to the first/second … elements’] to facilitate the alignment. We use the fact that a single window correspondence defines a similarity transformation that registers one indoor against one outdoor model.”).
Regarding claim 214, FRUEH-COHEN disclose the method of claim 199, further comprising:
identifying … first … elements of the first 3D model (COHEN; FIG. 2; p. 288; [The Examiner notes that in the ‘Outdoors’ lower-left, blue section of FIG. 2, the exterior windows are classified/identified in blue within the green ‘Detection’ section.]); and
identifying … second … elements of the second 3D model (COHEN; FIG. 2; p. 288, [The Examiner notes that in the ‘Indoors’ upper-left, red section of FIG. 2, the interior windows are classified/identified in blue within the green ‘Detection’ section.]); wherein
augmenting the first 3D model with the second 3D model further comprises
correlating the first 3D model with the second 3D model (COHEN; FIG. 2; p. 288; § 3; “Given separate indoor and outdoor models, we propose to align the inside and outside of a building through semantic information. … as windows are visible both from inside and outside, we use window detections to generate correspondences between the two models, which are then used to compute the alignment between the models.” [The Examiner notes that in the yellow section, the models are aligned/correlated.]), [which] … comprises
assigning a confidence value to each element of the first … elements and the second … elements (COHEN; pp. 291-292; § 5; “The goal of the alignment procedure is to transform the initially disjoint indoor and outdoor models into a common reference frame. Since traditional feature correspondences are not available, we instead employ window-to-window matches to facilitate the alignment. We utilize the fact that a single window correspondence defines a similarity transformation that registers one indoor against one outdoor model. This allows us to exhaustively evaluate all potential matches rather than having to rely on appearance to establish correspondences. This is important since the appearance of a window can change quite drastically between indoors and outdoors or might even be completely different, e.g., due to closed shutters or partial occlusion. A natural way to define the best alignment is to find the transformation that explains the largest number of window correspondences. However, the transformation maximizing the number of inlier matches is not necessarily plausible. For example, it does not guarantee that an indoor model does not protrude from the outside of the building. In this section, we introduce and discuss a quality metric [‘assigning a confidence value’] that takes both the number of inliers and the intersection between the models into account.”).
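The following minimal sketch is Examiner-drafted and hypothetical in form; COHEN’s exact quality metric is not reproduced in the quoted passage, so the scoring function below is an assumption for illustration only. It shows a quality/confidence value that accounts for both the window-match inlier count and the model intersection, rejecting implausible alignments outright.

    # Hypothetical, Examiner-drafted illustration only; the exact metric is an assumption.
    def alignment_quality(inlier_count, intersection_ratios, lam=0.05):
        # Reject alignments in which any model pair violates the free-space
        # constraint; otherwise score by the number of window-match inliers.
        if any(g >= lam for g in intersection_ratios):
            return float("-inf")   # e.g., an indoor model protrudes from the building
        return float(inlier_count)

    print(alignment_quality(7, [0.01, 0.02]))   # 7.0  (plausible alignment)
    print(alignment_quality(9, [0.08]))         # -inf (discarded despite more inliers)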
Regarding claim 215, FRUEH-COHEN disclose the method of claim 199, wherein augmenting the first 3D model with the second 3D model further comprises offsetting the first 3D model from the second 3D model (FRUEH; Col. 5, Lines 58-63; “… the endpoints of edge 312 A, and with them all other edges connected at those endpoints, are moved until the consensus score of all affected edges combined is maximal. As shown by the arrow in FIG. 3C this results in a shifting [‘offsetting’] of the ground model relative to the airborne model such that the edges 312A, 314A are better aligned.”).
Regarding claim 216, FRUEH-COHEN disclose the method of claim 199, further comprising:
deriving a scaling factor based on … (COHEN; FIG. 1; p. 286, second-to-last paragraph; “… we propose an alignment algorithm that exploits scene semantics to establish correspondences between indoor and outdoor models. … we exploit the fact that the windows of a building can be seen both from the inside and the outside. Towards this goal, we apply semantic classifiers to detect windows in the indoor and outdoor scenes. A single match between an indoor and outdoor window determines an alignment hypothesis (scale, rotation, translation) between the two models. All hypotheses are inspected and grossly wrong alignments are detected and discarded using a measure of intersection of the two models. Plausible alignments are then further refined using additional window matches. Our approach is robust to noisy window detections and is able to align disconnected indoor and outdoor models. … our method can handle both multiple and/or incomplete indoor or outdoor models.”); and
scaling … (COHEN; p. 293, § 5.1; “For correspondence search, we exhaustively explore all possible configurations Ci. We only consider window-to-window matches between indoor and outdoor models. We start by generating all unique pairwise window combinations between every unique pair of indoor and outdoor models. This initial set of combinations determines alignments between pairs of models. … a single window correspondence provides us with redundant observations for both the scale and z-translation estimation. To estimate the scale, we can use either the vertical or horizontal length of the window frames.”).
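The following minimal sketch is Examiner-drafted, hypothetical, and for illustration only; it is not COHEN’s code. It illustrates one way the redundant observations noted by COHEN could yield a scale factor, by averaging the vertical and horizontal window frame-length ratios.

    # Hypothetical, Examiner-drafted illustration only; not from the cited references.
    def scale_from_window(w_a, h_a, w_b, h_b):
        # Width/height of the matched window in models A and B; the two
        # redundant ratio observations are averaged into one scale A -> B.
        return 0.5 * (w_b / w_a + h_b / h_a)

    print(scale_from_window(1.0, 1.5, 1.5, 2.25))  # 1.5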
Regarding claim 219, FRUEH-COHEN disclose the method of claim 199, wherein the common coordinate system is generated based on aligning one or more axes of a coordinate system of the first 3D model to one or more axes of a second coordinate system of the second 3D model (COHEN; p. 290, § 4: ‘Natural Frame Estimation’; “To simplify the subsequent steps of our procedure, we align each 3D model into a canonical coordinate system. We choose the coordinate system that is aligned to the façade directions of the building. … we determine the main axes of each model by estimating the vanishing points in each input image. The vanishing points then vote for the three coordinate directions. … we align the coordinate system of each 3D model with the xyz-axes, such that the vertical axis is aligned with z and walls are mostly aligned with the x or y direction under a Manhattan world assumption.”).
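The following minimal sketch is Examiner-drafted, hypothetical, and for illustration only; it is not COHEN’s code. It illustrates the canonical-frame step quoted above: given main-axis directions (e.g., voted from vanishing points), the model is rotated so its vertical axis maps to z and its walls align with x or y.

    # Hypothetical, Examiner-drafted illustration only; not from the cited references.
    import numpy as np

    def to_canonical(points, axis_x, axis_y, axis_z):
        # Rows of R are the estimated main axes; R maps them onto x, y, z.
        R = np.vstack([axis_x, axis_y, axis_z]).astype(float)
        R /= np.linalg.norm(R, axis=1, keepdims=True)  # assumes near-orthonormal axes
        return points @ R.T

    pts = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
    # Toy main axes: the model is rotated 90 degrees about z relative to canonical.
    print(to_canonical(pts, axis_x=[0, 1, 0], axis_y=[-1, 0, 0], axis_z=[0, 0, 1]))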
Claim 202 is rejected under 35 U.S.C. 103 as being unpatentable over FRUEH in view of COHEN as applied to claim 199 above, and further in view of Pearson et al. (U.S. Patent 11,069,145; 'PEARSON').
Regarding claim 202, FRUEH-COHEN disclose the method of claim 199; however, FRUEH-COHEN do not explicitly disclose the following limitation(s), which PEARSON discloses:
generating the first/second outline of the first/second 3D model is based on a top-down view of the first/second 3D model (PEARSON; FIGS. 12-13; Col. 17, Lines 29-67; “… there may be a [UI] element or view toggle that can be selected … to bring up a dollhouse view of the property. … the view toggle 1140 of FIG. 11 may be selected in order to change the presented view of the property to a dollhouse view of the property, such as the dollhouse view 1220 … of the property [which] may show a virtualized 3D model of the property in which individual layers of the 3D model can be removed or rendered transparent to see additional information and/or in which layers of the 3D model can be separated from other layers of the 3D model [‘first/second outline of the first/second 3D model’]. … a roof layer of the 3D model [‘first outline of the first 3D model’] can be removed or toggled off to show a more detailed 3D view of the top floor of the property. Similarly, … layer(s) (e.g., a layer corresponding to an individual room or an individual floor in the property [‘second outline of the second 3D model’]) of the 3D model can be separated from the other layers of the 3D model so that a user can interact solely with the separated layer(s). … there may be a [UI] element or view toggle 1240 that can be selected in order to switch the view back as it was before. … there may be a feature to look at a view that splits the virtual 3D model of the property into sections of a 3D floor plan and/or into … 3D room(s). The different sections and/or rooms may be selected and individually viewed and/or manipulated by the user. FIGS. 13-15 show views in which the virtual 3D model of the property has been split up into sections. Data for the example views illustrated in FIGS. 13-15 can be generated by the AR modeling system 100, and the example views can be rendered and displayed by the application 132 of the user device 130. … FIG. 13 shows the view 1310 of a virtual 3D model of the property that has been split into different numbered sections (e.g., numbered 1-5) [‘top-down view of the first/second 3D model’]. Selecting one of the sections may re-center the view on that particular section and enable a user to explore that particular section of the property.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 199 of FRUEH-COHEN to include generating the first/second outline of the first/second 3D model based on a top-down view of the first/second 3D model, as taught by PEARSON. The motivation for this modification could have been to provide systems and methods for the generation and interactive display of 3D building models; for the determination and simulation of elements associated with the underlying buildings; for the prediction of changes to the underlying buildings; for the generation and display of updated 3D building models factoring in the predicted changes; and for the collection and display of relevant contextual information associated with the underlying building while presenting a 3D building model. Such a 3D building model can be displayed in an augmented reality (AR) view, and a user can interact with it via controls present in the AR view and/or by moving within the real-world location in which the user is present (PEARSON; Abstract).
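The following minimal sketch is Examiner-drafted, hypothetical, and for illustration only; PEARSON describes a top-down/dollhouse presentation, not this code. It illustrates deriving a simple top-down outline of a 3D model by projecting its vertices onto the ground plane and taking the 2D convex hull of the projected points.

    # Hypothetical, Examiner-drafted illustration only; not from the cited references.
    import numpy as np

    def top_down_outline(vertices_3d):
        # Project vertices onto the ground plane (drop z) and return the
        # convex-hull footprint via Andrew's monotone chain, in CCW order.
        pts = sorted({(float(x), float(y)) for x, y, _ in vertices_3d})
        if len(pts) <= 2:
            return pts
        def cross(o, a, b):
            return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
        lower, upper = [], []
        for p in pts:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]

    cube = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                     [0, 0, 2], [1, 0, 2], [1, 1, 2], [0, 1, 2]], float)
    print(top_down_outline(cube))  # the unit-square footprint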
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN M COFINO whose telephone number is (303) 297-4268. The examiner can normally be reached Monday-Friday 10A-4P MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN M COFINO/Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614