Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicants
Claims 1-3, 5-8, 10-17 and 19-23 are pending. Claims 1, 15 and 20 have been amended.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-3, 5-8, 10-17 and 19-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-8, 11-17 and 19-23 are rejected under 35 U.S.C. 103 as being unpatentable over Bell et al. (U.S. Publication No. 2016/0055268) (hereafter, "Bell") in view of Kulshreshtha et al. (U.S. Publication No. 2024/0062345) (hereafter, "Kulshreshtha").
Regarding claim 1, Bell teaches a computer-implemented method, comprising ([0109] While the subject matter has been described above in the general context of computer-executable instructions of a computer program): receiving, via a computing device, live camera feed data depicting a defined space ([0029] 3D sensors can be implemented on a camera to capture (e.g., simultaneously capture) texture data and geometric data associated with the interior environment. In another embodiment, the one or more 3D sensors can be implemented on a mobile device (e.g., a smartphone, etc.) to capture texture data and geometric data associated with the interior environment), the defined space being defined by interior surfaces including at least one wall ([0025] flat surfaces (e.g., walls, floors and/or ceilings) and/or objects (e.g., physical objects) associated with 3D data can be identified and/or segmented; [0092] At 1102, three-dimensional (3D) data associated with a 3D model of an architectural environment is received (e.g., by an identification component 104). At 1104, portions of the 3D data associated with flat surfaces are identified; [0049] The first identification component 202 can identify surfaces (e.g., flat surfaces and/or non-flat surfaces) associated with captured 3D data. In an aspect, the first identification component 202 can identify flat surfaces (e.g., walls, floors and/or ceilings) in captured 3D data); extracting three-dimensional geometric scan data and first texture data for the defined space ([0029] the one or more 3D sensors can be implemented on a camera to capture (e.g., simultaneously capture) texture data and geometric data associated with the interior environment); [0030] A 3D model of an interior environment (e.g., the captured 3D data) can comprise geometric data and/or texture data) from the live camera feed data ([0029] a camera to capture (e.g., simultaneously capture) texture data and geometric data associated with the interior environment … a mobile device (e.g., a smartphone, etc.) to capture texture data and geometric data associated with the interior environment); detecting an object occluding a first one of the interior surfaces of the defined space based on performing object recognition ([0059] The second identification component 204 can identify one or more objects attached to an identified surface (e.g., a surface that is identified by the first identification component 202) … the second identification component 204 can identify flat objects (e.g., posters, rugs, etc.) and/or other objects (e.g., paintings, furniture) connected to an identified flat surface (e.g., a wall, a floor or a ceiling); [0035] Captured 3D data for an occluded area can be missing when an area on a surface (e.g., a wall, a floor, a ceiling, or another surface) is occluded by an object (e.g., furniture, items on a wall, etc.); [0087] the object 612 can be identified by the identification component 104 (e.g., the identification component 202, the identification component 204 and/or the identification component 206)) in video processing of the live camera feed data ([0029] a camera to capture (e.g., simultaneously capture) texture data and geometric data associated with the interior environment … a mobile device (e.g., a smartphone, etc.) to capture texture data and geometric data associated with the interior environment; [0102] Input devices 1536 include ... 
digital video camera); in response to detecting the object: identifying a corresponding portion of the first interior surface occluded by the detected object ([0036] the data generation component 106 can determine that the edge of the surface that borders the edge of the object comprises at least a portion of missing data (e.g., a hole) ... the data generation component 106 can identify an edge of a surface associated with a rear portion of an occlusion boundary during a 3D capture process … the data generation component 106 can determine whether an object or another surface blocked a portion of a particular surface during a 3D capture process. Furthermore, the data generation component 106 can determine that an edge of the particular surface that was occluded during the 3D capture process is an occlusion boundary) … and estimating texture data for the corresponding portion of the first interior surface based on the first texture data for the defined space ([0035] the data generation component 106 can predict and/or generate data (e.g., geometry data and/or texture data) for a particular surface when an object on the particular surface is moved or removed. The one or more hole-filling techniques can be implemented to generate geometry data and/or texture data for an occluded area); [0027] The data generation feature can identify missing data associated with the portion of the captured 3D data and generate additional 3D data for the missing data based on other data associated with the portion of the captured 3D data; [0038]; [0039] The data generation component 106 can replicate captured 3D data located nearby missing data (e.g., a hole); [0041] the data generation component 106 can generate texture data for additional 3D data generated for missing data (e.g., a hole) associated with a flat surface and/or a non-flat surface. The data generation component 106 can sample a region surrounding missing data (e.g., a hole)), outputting texture data for the interior surfaces ([0027] The modification feature can modify geometry data and/or texture data for the portion of the captured 3D data; [0092] At 1110, geometry data and/or texture data for one or more of the portions of the 3D data associated with the flat surfaces and/or one or more of the other portions of the 3D data associated with the objects is modified; [0089] In FIG. 7, … the floor portion 602 can include missing data 702 (e.g., a hole 702) ... In FIG. 9 … texture data associated with the floor portion 602 (e.g., texture data surrounding an area related to the missing data 702) can be employed to further “fill in” missing data) based on the first texture data for the defined space and ([0092] At 1102, three-dimensional (3D) data associated with a 3D model of an architectural environment is received ... At 1104, portions of the 3D data associated with flat surfaces are identified; [0030] A 3D model of an interior environment (e.g., the captured 3D data) can comprise … texture data) the estimated texture data for the corresponding portion of the first interior surface ([0092] At 1108, other 3D data for missing data related to the portions of the 3D data associated with the flat surfaces is generated (e.g., by a data generation component 106).
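For illustration only, the following minimal sketch (not code from Bell) shows one way texture data for an occluded portion of a surface could be estimated from the surrounding first texture data, in the spirit of Bell's description of sampling a region surrounding missing data and blending by a distance-based weighted average; the array shapes and the inverse-distance weighting scheme are assumptions made for the example.

```python
# Illustrative sketch only: fill a hole in a surface texture by blending the
# surrounding known texels with weights that fall off with distance.
import numpy as np

def fill_hole(texture: np.ndarray, hole_mask: np.ndarray) -> np.ndarray:
    """texture: (H, W, 3) float array; hole_mask: (H, W) bool, True where data is missing."""
    filled = texture.copy()
    known_rows, known_cols = np.nonzero(~hole_mask)
    known_xy = np.stack([known_rows, known_cols], axis=1).astype(float)
    known_vals = texture[~hole_mask]
    for r, c in zip(*np.nonzero(hole_mask)):
        # Inverse-distance weights from every known texel to this missing texel.
        d = np.linalg.norm(known_xy - np.array([r, c], dtype=float), axis=1)
        w = 1.0 / (d + 1e-6)
        filled[r, c] = (w[:, None] * known_vals).sum(axis=0) / w.sum()
    return filled

# Example: a 16x16 checker texture with a 4x4 occluded patch.
tex = np.indices((16, 16)).sum(axis=0) % 2
tex = np.repeat(tex[:, :, None], 3, axis=2).astype(float)
mask = np.zeros((16, 16), dtype=bool)
mask[6:10, 6:10] = True
print(fill_hole(tex, mask)[8, 8])   # estimated texel inside the occluded patch
```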
Bell does not expressly teach based on determining an area of overlap between a three-dimensional bounding box encompassing a position of the detected object and the first interior surface.
However, Kulshreshtha teaches … based on determining an area of overlap between a three-dimensional bounding box encompassing a position of the detected object and the first interior surface ([0029] FIG. 23 illustrates an example of determining one or planes behind a foreground object based at least in part on plane masks and a location of the foreground object; [0148] FIG. 23 illustrates an example of determining one or planes behind a foreground object based at least in part on plane masks and a location of the foreground object ... As shown in box 2301, a user has selected a sofa for removal from the scene. The plane masks 2302 are used to determine which planes are behind this sofa, as shown in box 2303. For example, a pixel location of the removed object in the RGB image can be mapped onto the plane mask to determine which plane is at that pixel location (behind the object to be removed). Once the planes behind the sofa are determined, the 3d plane equations can be used in conjunction with the locations of pixels corresponding to the removed object to determine the estimated geometry of the planes behind the object to be removed; [0144] The RGB image 2201 is used to generate semantic map 2202 and depth map 2203 ... The depth map 2203 and the semantic map 2202 are then used to determine plane masks 2204, as shown in FIG. 22. This can be performed, for example, by superimposing semantic map 2202 on depth map 2203 to determine locations of planes within the scene; [0107] The semantic map can include three-dimensional semantic maps … are associated with three dimensional geometry; FIG. 6F shows three-dimensional bounding boxes encompassing the detected objects; [0116] As shown in FIG. 9, the semantic map 902 is superimposed with pixels corresponding to object instances (shown, in this example, in instance map 901) in order to identify foreground objects; [0118] foreground object identification can be based on one or more of depth information (identifying objects at the front of an image relative to a background or background planes/geometry), three dimensional geometry information (analyzing coordinates in three dimensional space)).
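For illustration only, the following sketch (not code from Kulshreshtha) shows how the plane(s) behind a detected object might be determined by measuring the overlap between the object's projected bounding-box footprint and per-plane masks, consistent with the mapping of object pixel locations onto plane masks described above; the label convention, the minimum-overlap threshold and the example values are assumptions.

```python
# Illustrative sketch only: rank the planes that lie behind a detected object by
# counting overlapping pixels between each plane mask and the object footprint.
import numpy as np

def planes_behind_object(plane_mask: np.ndarray, object_footprint: np.ndarray,
                         min_overlap_px: int = 50):
    """plane_mask: (H, W) int labels, one label per detected plane (0 = no plane).
    object_footprint: (H, W) bool, True where the object's 3D bounding box projects."""
    overlaps = {}
    for label in np.unique(plane_mask):
        if label == 0:
            continue
        overlap = int(np.count_nonzero((plane_mask == label) & object_footprint))
        if overlap >= min_overlap_px:
            overlaps[label] = overlap
    # Planes with the largest overlap are the most likely occluded surfaces.
    return sorted(overlaps, key=overlaps.get, reverse=True)

# Example: plane 1 (a wall) occupies the top half of the image, plane 2 (the floor)
# the bottom half; the detected sofa's bounding box straddles both.
mask = np.zeros((100, 100), dtype=int)
mask[:50, :], mask[50:, :] = 1, 2
footprint = np.zeros((100, 100), dtype=bool)
footprint[40:80, 30:70] = True
print(planes_behind_object(mask, footprint))   # -> [2, 1]
```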
It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the method and device of Bell to incorporate the step/system, taught by Kulshreshtha, of identifying the portion (estimated geometry) of the planes behind a detected foreground object by mapping the pixel locations of the foreground object onto plane masks, that is, by determining the area of overlap between the foreground object, identified using three-dimensional geometry information such as three-dimensional semantic maps and bounding boxes, and the planes it occludes.
The suggestion/motivation for doing so would have been to improve the accuracy of the estimated views behind foreground objects and thereby enhance inpainting ([0004] there is a need for improvements in systems and foreground object deletion and inpainting; [0094] the process can be enhanced when multiple images and multiple viewpoints are provided. When multiple views are provided, the scene/room can be scanned from multiple vantage points, allowing visibility of foreground objects from their sides and more accurate geometry estimation of foreground objects, as well as views behind foreground objects. The additional data derived from additional viewpoints and images also allows for improvements in nesting of objects and deletion of objects). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Bell with Kulshreshtha to obtain the invention as specified in claim 1.
Regarding claim 2, the combination of Bell and Kulshreshtha teaches all the limitations of claim 1 above. Bell teaches further comprising outputting a model of the defined space ([0024] Digital three-dimensional (3D) models can be generated based on scans of architectural spaces (e.g., houses, construction sites, office spaces, etc); [0045] The modification component 108 can generate an updated 3D model in response to modifying one or more surfaces and/or one or more objects in a 3D model), the model including the texture data for the interior surfaces ([0030] A 3D model of an interior environment (e.g., the captured 3D data) can comprise geometric data and/or texture data; [0044] Each flat surface, non-flat surface and/or object identified by the identification component 104 can be uniquely modified by the modification component 108 ... Geometry data, texture data and/or other data for a surface can be modified by the modification component 108).
Regarding claim 3, the combination of Bell and Kulshreshtha teaches all the limitations of claim 2 above. Bell teaches wherein outputting the model of the defined space ([0024] Digital three-dimensional (3D) models can be generated based on scans of architectural spaces (e.g., houses, construction sites, office spaces, etc)) comprises presenting, in a display device, display data representing the texture data for the interior surfaces ([0084] The user input component 402 can employ and/or be associated with a graphical user interface to allow a user to interact with 3D data. The graphical user interface can comprise a 3D rendering of 3D data (e.g. a 3D model and/or a set of identified surfaces) and/or a navigation interface; [0025] geometry data and/or texture data for identified flat surfaces and/or objects can be modified based on data received via a user interface).
Regarding claim 5, the combination of Bell and Kulshreshtha teaches all the limitations of claim 1 above. Bell teaches wherein the defined space is an interior space ([0031] An interior environment (e.g., an indoor environment, an interior architectural environment, etc.) can include, but is not limited to, one or more rooms, one or more houses, one or more apartments, one or more office spaces).
Regarding claim 6, the combination of Bell and Kulshreshtha teaches all the limitations of claim 5 above. Bell teaches wherein the interior space is a room ([0031] An interior environment (e.g., an indoor environment, an interior architectural environment, etc.) can include, but is not limited to, one or more rooms, one or more houses, one or more apartments, one or more office spaces).
Regarding claim 7, the combination of Bell and Kulshreshtha teaches all the limitations of claim 1 above. Bell teaches wherein the live camera feed data comprises ([0029] the one or more 3D sensors can be implemented on a camera to capture (e.g., simultaneously capture) texture data and geometric data associated with the interior environment; [0030] A 3D model of an interior environment (e.g., the captured 3D data) can comprise geometric data and/or texture data) at least one of camera data or LiDAR scanner data ([0025] A 3D reconstruction system can employ 2D image data and/or depth data captured from 3D sensors (e.g., laser scanners, structured light systems, time-of-flight systems, etc.) to generate the 3D data (e.g., the 3D-reconstructed data); [0029] the one or more 3D sensors can be implemented on a camera to capture (e.g., simultaneously capture) texture data and geometric data associated with the interior environment).
Regarding claim 8, the combination of Bell and Kulshreshtha teaches all the limitations of claim 1 above. Bell teaches wherein identifying the corresponding portion of the first interior surface comprises ([0036] the data generation component 106 can determine that the edge of the surface that borders the edge of the object comprises at least a portion of missing data (e.g., a hole) ... the data generation component 106 can identify an edge of a surface associated with a rear portion of an occlusion boundary during a 3D capture process (e.g., when the captured 3D data is captured by one or more 3D sensors). For example, the data generation component 106 can determine whether an object or another surface blocked a portion of a particular surface during a 3D capture process. Furthermore, the data generation component 106 can determine that an edge of the particular surface that was occluded during the 3D capture process is an occlusion boundary) determining a three-dimensional occlusion area ([0037] the data generation component 106 can generate additional 3D data (e.g., triangle mesh data) for missing data (e.g., a hole) associated with a particular surface in response to a determination of edges corresponding to a boundary associated with missing data (e.g., a hole boundary); [0089] the floor portion 602 can include missing data 702 (e.g., a hole 702). The missing data 702 can include uneven (e.g., irregular, jagged, etc.) edges due to a triangle mesh boundary) associated with the detected object ([0036] the data generation component 106 can determine whether an object or another surface blocked a portion of a particular surface during a 3D capture process; [0035] Captured 3D data for an occluded area can be missing when an area on a surface (e.g., a wall, a floor, a ceiling, or another surface) is occluded by an object (e.g., furniture, items on a wall, etc.)).
Regarding claim 11, the combination of Bell and Kulshreshtha teaches all the limitations of claim 1 above. Bell teaches wherein estimating the texture data for the corresponding portion of the first interior surface comprises ([0035] the data generation component 106 can predict and/or generate data (e.g., geometry data and/or texture data) for a particular surface when an object on the particular surface is moved or removed. The one or more hole-filling techniques can be implemented to generate geometry data and/or texture data for an occluded area); [0027] The data generation feature can identify missing data associated with the portion of the captured 3D data and generate additional 3D data for the missing data based on other data associated with the portion of the captured 3D data; [0038]; [0039] The data generation component 106 can replicate captured 3D data located nearby missing data (e.g., a hole); [0041] the data generation component 106 can generate texture data for additional 3D data generated for missing data (e.g., a hole) associated with a flat surface and/or a non-flat surface. The data generation component 106 can sample a region surrounding missing data (e.g., a hole)).
Bell does not expressly teach obtaining second texture data for portions of the first interior surface that are not occluded by the detected object; and estimating the texture data for the corresponding portion using an inpainting technique based on the second texture data.
However, Kulshreshtha teaches obtaining second texture data for portions of the first interior surface that are not occluded by the detected object ([0193] the texture regions can further indicate different texture regions on each plane and this information can be used to determine both the planes that are occluded by the object and the specific texture regions that are occluded by the object; FIG. 32 shows Texture Region (plane 1) and Texture Region (Plane 2) are not occluded by the object); and estimating the texture data for the corresponding portion using an inpainting technique based on the second texture data ([0194] Having identified the relevant occluded planes (and optionally the relevant occluded texture regions), textures are extracted from each of the planes (and optionally from each of the texture regions) for use in the infill/inpainting process. The textures can be extracted from portions of the plane (and optionally the texture region) that are not occluded (as indicated by the contextual information); [0195] The extracted texture regions are then used to inpaint the portions of planes behind the removed object (i.e., the portions of the planes occluded by the object), as shown in box 3202).
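For illustration only, a minimal sketch of inpainting the occluded texture region from the non-occluded (second) texture data; OpenCV's generic inpainting routine is used here purely as a stand-in, since Kulshreshtha does not specify a particular inpainting algorithm, and the synthetic texture and mask are assumptions made so the example is self-contained.

```python
# Illustrative sketch only: reconstruct the texture behind a removed object from
# the surrounding, non-occluded texture using a generic inpainting routine.
import cv2
import numpy as np

# Synthetic "wall" texture: vertical stripes, with a rectangular patch that was
# occluded by the detected object.
wall = np.zeros((120, 160, 3), dtype=np.uint8)
wall[:, ::8] = (200, 180, 160)             # simple stripe pattern
occluded = np.zeros((120, 160), dtype=np.uint8)
occluded[40:80, 60:110] = 255              # 255 where the object was
wall[occluded == 255] = 0                  # texture unknown behind the object

# Pixels marked in the mask are reconstructed from the surrounding texture;
# INPAINT_TELEA propagates texture inward from the hole boundary.
restored = cv2.inpaint(wall, occluded, 3, cv2.INPAINT_TELEA)
print(restored[60, 85])                    # estimated texel behind the object
```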
It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the method and device of the combination of Bell and Kulshreshtha to incorporate the step/system, taught by Kulshreshtha, of extracting texture information from the texture regions on each plane that are not occluded by the object and generating texture for the occluded texture regions using an inpainting process based on that extracted texture information.
The motivation for this combination has been set forth in the rejection of claim 1 above.
Regarding claim 12, the combination of Bell and Kulshreshtha teaches all the limitations of claim 11 above. Bell teaches wherein estimating the texture data for the corresponding portion comprises performing pattern recognition for identifying primary patterns associated with the second texture data ([0045] the modification component 108 can identify one or more portions of a surface and/or an object that comprises a consistent appearance or patterned texture to facilitate modifying a surface and/or an object … the modification component 108 can employ one or more appearance/illumination disambiguation algorithms to identify one or more portions of a surface and/or an object that comprise a consistent appearance or patterned texture. One or more portions of a surface and/or an object that comprise a consistent appearance or patterned texture can be employed as a set of areas for an appearance modification and/or a texture modification).
Regarding claim 13, the combination of Bell and Kulshreshtha teaches all the limitations of claim 1 above. Bell teaches wherein detecting the object comprises determining that the object is positioned in spaced relation to the occluded interior surface ([0034] the identification component 104 can segment and/or identify objects (e.g., physical objects) associated with the captured 3D data (e.g., mesh data) … object can be associated with (e.g., connected to) one or more flat surfaces. As such, the identification component 104 can segment and/or identify objects (e.g., another portion of the captured 3D data associated with an object) based on proximity data in relation to one or more flat surfaces and/or texture data in association with one or more flat surfaces); [0035] Captured 3D data for an occluded area can be missing when an area on a surface (e.g., a wall, a floor, a ceiling, or another surface) is occluded by an object (e.g., furniture, items on a wall, etc.)).
Regarding claim 14, the combination of Bell and Kulshreshtha teaches all the limitations of claim 1 above. Kulshreshtha teaches wherein detecting the object comprises performing object recognition based on the three-dimensional geometric scan data using a trained machine learning model ([0118] the step of foreground object identification can be performed in other ways and using other forms of contextual information. For example, foreground object identification can be based on one or more of depth information (identifying objects at the front of an image relative to a background or background planes/geometry), three dimensional geometry information (analyzing coordinates in three dimensional space), instance segmentation, pattern/image recognition (recognizing furniture or other objects with or without a neural network); [0097] the system can store a three dimensional geometric model corresponding to the scene or a portion of the scene; [0298] Any 3D representation of the capture scene (e.g. point clouds, depth-maps, meshes, voxels, where 3D points are assigned their respective texture/color) can also be used; [0299] The captures are used to obtain perceptual quantities/contextual information, aligned to one or more of the input images).
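For illustration only, the following sketch shows what object recognition with a trained machine learning model can look like, here using an off-the-shelf pretrained detector on a single camera frame; neither reference mandates this model, the claim also contemplates use of three-dimensional geometric scan data rather than a 2D frame alone, and the image path and a recent torchvision version are assumptions.

```python
# Illustrative sketch only: detect furniture-like objects in one camera frame
# with a pretrained Faster R-CNN detector from torchvision.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = to_tensor(Image.open("room_frame.jpg").convert("RGB"))  # placeholder path
with torch.no_grad():
    detections = model([frame])[0]   # dict with 'boxes', 'labels', 'scores'

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.7:
        print(int(label), [round(float(v), 1) for v in box])
```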
Regarding claim 21, the combination of Bell and Kulshreshtha teaches all the limitations of claim 1 above. Bell teaches further comprising providing, via an augmented reality (AR) device, an AR scene of the defined space ([0025] Semantic understanding of 3D data (e.g., 3D-reconstructed data) generated based on a 3D reconstruction system can facilitate automatic and/or semi-automatic generation of 3D models of real-world locations (e.g., houses, apartments, construction sites, office spaces, commercial spaces, other living spaces, other working spaces, etc.) ... geometry data and/or texture data for identified flat surfaces and/or objects can be modified based on data received via a user interface (e.g., a user interface implemented on a remote client device). The identification, segmentation, augmentation and/or modification of 3D data can facilitate generation of a 3D model and/or a floorplan) with the detected object removed ([0059] identified flat objects and/or other identified objects can be removed (e.g., deleted, moved, etc.) from captured 3D data) based on combining the live camera feed data and the outputted texture data for the interior surfaces ([0092] At 1102, three-dimensional (3D) data associated with a 3D model of an architectural environment is received (e.g., by an identification component 104). At 1104, portions of the 3D data associated with flat surfaces are identified (e.g., by a first identification component 202). At 1106, other portions of the 3D data associated with objects are identified (e.g., by a second identification component 204). At 1108, other 3D data for missing data related to the portions of the 3D data associated with the flat surfaces is generated (e.g., by a data generation component 106). At 1110, geometry data and/or texture data for one or more of the portions of the 3D data associated with the flat surfaces and/or one or more of the other portions of the 3D data associated with the objects is modified (e.g., by a modification component 108)).
Regarding claim 22, the combination of Bell and Kulshreshtha teaches all the limitations of claim 21 above. Kulshreshtha teaches further comprising causing to be displayed, in the AR scene of the defined space ([0074] The disclosed methods and techniques are described in the context of an interior design system. Mixed reality technologies are proving to be promising ways to help people reimagine rooms with new furnishings; [0075]), at least one replacement object different from the detected object ([0091] FIG. 3B illustrates an example of the process of removing and adding new furniture to a scene according to an exemplary embodiment. The furniture in the original image 303 of the scene is removed and new furniture is inserted into the scene, as shown in 304).
Regarding claim 23, the combination of Bell and Kulshreshtha teaches all the limitations of claim 21 above. Bell teaches wherein estimating the texture data for the corresponding portion of the first interior surface comprises ([0035] the data generation component 106 can predict and/or generate data (e.g., geometry data and/or texture data) for a particular surface when an object on the particular surface is moved or removed. The one or more hole-filling techniques can be implemented to generate geometry data and/or texture data for an occluded area); [0027] The data generation feature can identify missing data associated with the portion of the captured 3D data and generate additional 3D data for the missing data based on other data associated with the portion of the captured 3D data; [0038]; [0039] The data generation component 106 can replicate captured 3D data located nearby missing data (e.g., a hole); [0041] the data generation component 106 can generate texture data for additional 3D data generated for missing data (e.g., a hole) associated with a flat surface and/or a non-flat surface. The data generation component 106 can sample a region surrounding missing data (e.g., a hole)) using depth information for the AR scene to determine surface texture at a same depth as the first interior surface ([0043] a hole associated with a hole boundary that is within a certain distance from being planar can be filled ... texture data from an area around the hole can be blended according to a weighted average based on distance to provide visual data for the hole (e.g., missing data included in the hole)).
With respect to claim 15, arguments analogous to those presented for claim 1 are applicable.
With respect to claim 16, arguments analogous to those presented for claim 2 are applicable.
With respect to claim 17, arguments analogous to those presented for claim 3 are applicable.
With respect to claim 19, arguments analogous to those presented for claim 11 are applicable.
With respect to claim 20, arguments analogous to those presented for claim 1 are applicable.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Bell et al. (U.S. Publication No. 2016/0055268) (hereafter, "Bell") in view of Kulshreshtha et al. (U.S. Publication No. 2024/0062345) (hereafter, "Kulshreshtha"), and further in view of CHOI et al. (U.S. Publication No. 2020/0143557) (hereafter, "CHOI").
Regarding claim 10, the combination of Bell and Kulshreshtha teaches all the limitations of claim 1 above. The combination of Bell and Kulshreshtha does not expressly teach wherein the three-dimensional bounding box is represented using geometrical coordinates associated with boundaries of the three-dimensional bounding box.
However, CHOI teaches wherein the three-dimensional bounding box is represented using geometrical coordinates associated with boundaries of the three-dimensional bounding box ([0029] The 3D bounding box may be a rectangular parallelepiped … the 3D bounding box may be specified using the coordinates of eight corner points. Alternatively, the 3D bounding box may be specified using a combination of locations and sizes. A location may be expressed by the coordinate of a corner point or the coordinate of a center point on a bottom surface, and a size may be expressed by a width, a length, or a height).
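For illustration only, a minimal sketch of the two equivalent representations CHOI describes: a bounding box specified by a location and a size, expanded into the coordinates of its eight corner points. An axis-aligned box and the example dimensions are assumptions made for the sketch; CHOI's rectangular parallelepiped may more generally carry an orientation.

```python
# Illustrative sketch only: expand a (bottom-center location, width, length,
# height) description of a 3D bounding box into its eight corner coordinates.
import numpy as np

def corners_from_location_size(bottom_center, width, length, height):
    cx, cy, cz = bottom_center                      # z gives the bottom surface
    hw, hl = width / 2.0, length / 2.0
    corners = []
    for dz in (0.0, height):                        # bottom face, then top face
        for dx in (-hw, hw):
            for dy in (-hl, hl):
                corners.append((cx + dx, cy + dy, cz + dz))
    return np.array(corners)                        # shape (8, 3)

# Example: a sofa-sized box whose bottom face is centered at (1.0, 2.0, 0.0).
print(corners_from_location_size((1.0, 2.0, 0.0), width=0.8, length=1.6, height=0.7))
```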
It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the method and device of the combination of Bell and Kulshreshtha to incorporate the step/system, taught by CHOI, of representing a three-dimensional bounding box using the coordinates of the eight corner points of the rectangular parallelepiped that forms the bounding box.
The suggestion/motivation for doing so would have been to improve the accuracy of detecting an object in a 3D coordinate system ([0032] Embodiments below describe techniques for accurately detecting a volume in a 3D coordinate system even if an object is partially hidden or cut off in a 2D image by iteratively searching for candidates for the direction of the volume in a 3D coordinate system based on projective geometry). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Bell and Kulshreshtha with CHOI to obtain the invention as specified in claim 10.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL C. CHANG whose telephone number is (571)270-1277. The examiner can normally be reached Monday-Thursday and Alternate Fridays 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan S. Park can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL C CHANG/ Examiner, Art Unit 2669
/CHAN S PARK/ Supervisory Patent Examiner, Art Unit 2669