DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, filed 02/11/2026, with respect to claims 1-5 and 7-18 have been fully considered but are moot because the arguments do not apply to the references/combination of references being used in the current rejection.
Claim Objections
Claim 1 is objected to because of the following informalities:
At line 10, the phrase “determining dimensions of the detected object_;” should be changed to “determining dimensions of the detected object;” to remove the additional space before the semi-colon. It is unclear whether another change was intended, given that the surrounding language and the semi-colon appear unchanged from prior claim sets. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 5, 7-11 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over AARTS et al. (US 20120183204 A1), hereinafter referenced as AARTS, in view of WEXLER (US 20160104288 A1), hereinafter referenced as WEXLER, and in further view of FATHI (US 20160364885 A1), hereinafter referenced as FATHI.
Regarding claim 1, AARTS explicitly teaches a method (Fig. 3. Paragraph [0019]-AARTS discloses the present disclosure is directed to systems and methods for rendering one or more two dimensional images into a three dimensional virtual environment, or background, that can be manipulated by arranging three dimensional virtual objects in the three dimensional environment, altering lighting, changing textures and colors, etc., and presenting the altered two dimensional image from a virtual camera viewpoint, and with a virtual camera orientation, that can be interactively changed. In paragraph [0026]-AARTS discloses FIG. 3 illustrates a process performed by the image conversion system 100) implemented with instructions executed by a processor (Fig. 2, #202 called a central processing unit. Paragraph [0022]-AARTS discloses computer 102 comprises a central processing unit (CPU) 202, an input output (I/O) unit 204, a display device 206, a secondary storage device 208, and a memory 210. In paragraph [0023]-AARTS discloses computer 102's memory 210 includes a Graphical User Interface ("GUI") 212 that is used to gather information from a user via the display device 206 and I/O unit 204 as described herein. The GUI 212 includes any user interface capable of being displayed on a display device 206 including, but not limited to, a web page, a display panel in an executable program, or any other interface capable of being displayed on a computer screen. Please also read paragraph [0002] and claim 1), comprising:
processing a single digital image of an interior space (Fig. 3. Paragraph [0027]-AARTS discloses in step 302, an image is captured by an image capturing unit communicatively coupled to a computer 102, 104, or 106. The image may be captured using any conventional image capturing device such as, but not limited to, a digital camera or any other device capable of capturing an image and converting the image into a digital format. The image is transmitted to the image receiving unit 110 operating in the memory of the computer 102 in step 304 using any conventional information transferring method);
determining dimensions of the detected object (Fig. 3. Paragraph [0027]-AARTS discloses in step 308, the image analysis unit 114 determines the physical dimensions of the room in which the image was captured, based on the image information. Please also read paragraph [0036]);
applying image segmentation to the single digital image to produce a segmented image (Fig. 6. Paragraph [0028]-AARTS discloses the image analysis unit 114 may identify objects in an image by analyzing the pixels in the image to determine lines where the pixel colors change from one color to another. In paragraph [0036]-AARTS discloses the image analysis unit 114 may also use a line analysis algorithm to identify the lines that form the intersections between the walls 402, 404, and 406 in the image. The line analysis algorithm may include a Hough transform algorithm or any other image line analysis algorithm that is known in the art) (an illustrative code sketch of this segmentation and line analysis appears after this claim mapping);
detecting edges in the segmented image to produce a combined output image (Fig. 6. Paragraph [0047]-AARTS discloses in step 612, the image analysis unit 114 identifies objects 604 within the removal area 602. The image analysis unit 114 may use any known object identification technique such as edge detection, image matching, or any other known image identification technique) produced from said single digital image (Fig. 4. Paragraph [0026]-AARTS discloses in step 302, an image is captured by an image capturing unit communicatively coupled to a computer 102, 104, or 106. Please also read paragraph [0028-0030 and 0036]);
applying dimensions (Fig. 3. Paragraph [0029]-AARTS discloses in step 316, the image conversion unit 116 converts the image from a two dimensional image into a three dimensional image using the dimensions of the room. Further in paragraph [0042]-AARTS discloses in step 514, a virtual three-dimensional representation of the room in the image 400 is stored in memory 210 of the computer 102 along with the optimal vector x, which defines the dimensions of the room, and the information previously gathered in step 508 that defines the appearance of the room) to the geometrically transformed digital image (Fig. 3. Paragraph [0036]-AARTS discloses the image analysis unit 114 may also use a line analysis algorithm to identify the lines that form the intersections between the walls 402, 404, and 406 in the image. The line analysis algorithm may include a Hough transform algorithm or any other image line analysis algorithm that is known in the art) at least partially based on the dimensions of the detected object to produce a dimensionalized floorplan (Fig. 3. Paragraph [0038]-AARTS discloses in step 508, the image analysis unit 114 gathers information on each wall 402, 404, and 406. Additionally in paragraph [0039]-AARTS discloses in step 510, the image analysis unit 114 calculates an initial estimate of the room dimensions and the image capturing device properties based on the basic dimensional information gathered from the image).
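By way of illustration only, and not as a characterization of AARTS's actual code, the following minimal Python/OpenCV sketch shows an edge-detection and Hough line-analysis pipeline of the general kind mapped above; the function name, file path, thresholds and Hough parameters are assumptions chosen for the example.

    # Minimal sketch, assuming OpenCV (cv2) and NumPy are available; illustrative
    # only, not the cited references' implementation. Parameter values are arbitrary.
    import cv2
    import numpy as np

    def detect_wall_lines(image_path: str) -> np.ndarray:
        """Detect edges and candidate wall-intersection lines in a room image."""
        img = cv2.imread(image_path)                  # single digital image of an interior space
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Edge map: pixels where intensity changes from one region to another.
        edges = cv2.Canny(gray, 50, 150)

        # Hough transform: recover straight lines such as wall/floor intersections.
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                threshold=80, minLineLength=40, maxLineGap=5)

        # "Combined output image": the original image with detected lines overlaid.
        combined = img.copy()
        if lines is not None:
            for x1, y1, x2, y2 in lines.reshape(-1, 4):
                cv2.line(combined, (int(x1), int(y1)), (int(x2), int(y2)),
                         (0, 255, 0), 2)
        return combined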
AARTS fails to explicitly teach applying geometric transformation, field of view and depth correction to the combined output image to correct for image distortion to produce a geometrically transformed digital image.
However, WEXLER explicitly teaches applying geometric transformation (Fig. 10. Paragraph [0143]-WEXLER discloses the second axis (e.g., x-axis) rotation correction is then calculated (step 316) followed by calculating third axis (e.g., y-axis) rotation (step 318) (wherein the rotations about both axes are a geometric transformation). Please also read paragraph [0133, 0163 and 0182-0183]), field of view (Fig. 10. Paragraph [0142]-WEXLER discloses when the perspective correction is performed for a rectangular primary object in an automated way as shown in FIG. 10. The image is then resampled (step 320). A correction term is computed which estimates the level of perspective distortion based upon the peaks located in the Hough accumulator corresponding to the center area of the window, such as the center area of a window divided into equal divisions of three each in the vertical and horizontal directions (wherein perspective distortion is field of view correction). Barrel and/or pincushion corrections are then calculated and applied (step 324) as described supra. Please also read paragraph [0133, 0163 and 0182-0183]) and depth correction (Fig. 10. Paragraph [0142]-WEXLER discloses if no Hough transformation peaks are detected from the central portion of the primary object (step 308), the z-axis rotation for peripheral near-vertical and near-horizontal lines are computed and z-axis rotation correction is applied (step 310) such that the sum of the average line slopes at opposing edges is minimized. In paragraph [0143]-WEXLER discloses optionally, the method may be repeated on the accumulator peaks of the corrected image 326 beginning with z-axis correction (step 310 or 312) (wherein z-axis rotation correction/correction is depth correction). Further in paragraph [0184]-WEXLER discloses once the distortion corrections have been applied to the image, the rotational relationship of a reference dimension with respect to the corrected image constraint length is used to align a reference dimension to a constraint length to be measured. A depth perspective correction may be applied if the reference dimension is in a plane to the interior or exterior of the plane containing the constraint length to be measured) to the combined output image (Fig. 10. Paragraph [0137]-WEXLER discloses after capture of an image and associated capture metadata, the image undergoes image processing to measure the reference dimensions and allow calculation of lengths for design of supplemental parts. In paragraph [0142]-WEXLER discloses when the perspective correction is performed for a rectangular primary object in an automated way as shown in FIG. 10, the second example method receives a digital image and optional metadata (step 300). The edges in the image are then detected to generate a binary edge map of the image (step 302). A Hough transformation is then computed (step 304) (wherein the combined output image is formed from the binary edge map and/or Hough transformation)) to correct for image distortion to produce a geometrically transformed digital image (Fig. 10. Paragraph [0143]-WEXLER discloses the original inputted digital image is then resampled according to the correction term for removing part or all of the perspective distortion (step 314). Please also read paragraph [0110-0111, 0138, and 0140-0141]).
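For context only, and not as WEXLER's actual algorithm, the following minimal sketch shows perspective-distortion correction by resampling the image under a homography; it assumes the four corners of the rectangular primary object have already been located (for example, from Hough peaks), and the names and sizes are illustrative.

    # Minimal sketch, assuming OpenCV; not WEXLER's algorithm. `corners` is the
    # assumed, already-detected quadrilateral of the rectangular primary object.
    import cv2
    import numpy as np

    def correct_perspective(img: np.ndarray, corners: np.ndarray,
                            out_w: int, out_h: int) -> np.ndarray:
        """Resample so the quadrilateral `corners` (ordered TL, TR, BR, BL) maps
        to an axis-aligned rectangle, removing perspective distortion."""
        dst = np.float32([[0, 0], [out_w - 1, 0],
                          [out_w - 1, out_h - 1], [0, out_h - 1]])
        H = cv2.getPerspectiveTransform(np.float32(corners), dst)
        return cv2.warpPerspective(img, H, (out_w, out_h))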
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of AARTS of having a method implemented with instructions executed by a processor, with the teachings of WEXLER of applying geometric transformation, field of view and depth correction to the combined output image to correct for image distortion to produce a geometrically transformed digital image.
That is, the method of AARTS is modified to apply geometric transformation, field of view and depth correction to the combined output image to correct for image distortion, producing a geometrically transformed digital image.
The motivation behind the modification would have been to obtain a method that allows for the generation and manipulation of 3D models and improved measurement results, since both AARTS and WEXLER concern systems and methods for generating virtual environments from two dimensional images. AARTS provides systems and methods for generating and manipulating an accurate and scaled virtual environment from two dimensional images, while WEXLER provides systems and methods that improve the alignment and positioning of an image capture device and allow for the processing of a digital image to improve measurement capabilities, including the distances related to an object depicted in an image. Please see AARTS et al. (US 20120183204 A1), Abstract and Paragraph [0019, 0032 and 0055] and WEXLER (US 20160104288 A1), Abstract, Paragraph [0051 and 0132] and claims 5, 9 and 13.
Although AARTS explicitly teaches identifying at least one detected object (Fig. 3. Paragraph [0028]-AARTS discloses in step 310, the image analysis unit 114 identifies objects in the image. Please also read paragraph [0055]) within the single digital image (Fig. 3. Paragraph [0027]-AARTS discloses in step 302, an image is captured by an image capturing unit communicatively coupled to a computer 102, 104, or 106. The image may be captured using any conventional image capturing device such as, but not limited to, a digital camera or any other device capable of capturing an image and converting the image into a digital format);
AARTS in view of WEXLER fails to explicitly teach wherein said at least one detected object comprises a standard construction object having standard known dimensions, wherein said standard construction object is an electrical wall plate selected from the group consisting of an electrical outlet wall plate, a toggle switch wall plate, and a paddle switch wall plate.
However, FATHI explicitly teaches identifying at least one detected object within the single digital image (Fig. 1B. Paragraph [0067]-FATHI discloses the present invention provides accurate 3D digital representations of at least one object in a scene. In paragraph [0079]-FATHI discloses the 3D digital representations of the at least one object of interest are derived from the plurality of overlapping 2D images. In paragraph [0082]-FATHI discloses measurements of the at least one object of interest can be obtained without use of, or in addition to, a marker (wherein a marker is a ruler or other standard sized object). In this regard, the invention utilizes an internal or “intrinsic” reference. In paragraph [0083]-FATHI discloses with regard to an intrinsic reference derived from the focal length of the image-capture device most existing image-capture devices (e.g., cameras) comprise a short depth of field, resulting in images which appear focused only on a small 3D slice of the scene. Please also see Fig. 3 and read paragraph [0081-0085]), wherein said at least one detected object comprises a standard construction object having standard known dimensions (Fig. 1B. Paragraph [0084]-FATHI discloses in a further aspect of the intrinsic reference feature of the present invention, a library of standard object identities and sizes can be included in the software associated with the image-capture device to provide data from which measurement data for the at least one object of interest can be derived. Please also see Fig. 3 and read paragraph [0081-0082 and 0085]), wherein said standard construction object is an electrical wall plate selected from the group consisting of an electrical outlet wall plate, a toggle switch wall plate, and a paddle switch wall plate (Fig. 3. Paragraph [0084]-FATHI discloses for example, if a single toggle light switchplate, which has a standard US size of 4.5 inches (11.42 cm) in height and 2.75 inches (6.985 cm) in width, appears in a scene with an object of interest, the known standard dimensions of this switchplate can be used as an intrinsic reference to provide a point of reference from which the dimensions of the object of interest can be derived. In paragraph [0085]-FATHI discloses if the identified reference object that will serve as the intrinsic reference for providing measurement of an object of interest present in the scene is a switchplate cover, the system 101 will elicit and receive the specification of an object to be used for dimensional calibration, and the user will select the switchplate cover to serve as the intrinsic reference. Therefore, it would have been obvious to one of ordinary skill in the art to use a group of standard construction objects consisting of an electrical outlet wall plate, a toggle switch wall plate, and a paddle switch wall plate. The types of electrical wall-plates listed represent the most common and well-known categories of wall plates. Moreover, electrical wall plates are ubiquitous construction objects that are often in close proximity to other more important target objects. They also offer highly consistent and/or standardized dimensions. Thus, it would be obvious to use this specific group of electrical wall plates given they are strong potential candidates for a reference object. This would be an obvious extension of the functionality of FATHI and improve the accuracy and consistency of object detection, identification and measurement);
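By way of illustration only, the following minimal Python sketch shows FATHI's intrinsic-reference idea in its simplest form: the known standard dimensions of a detected single toggle switch plate (4.5 in x 2.75 in per FATHI [0084]) set a scale from which another object's dimensions can be derived. The names are assumptions, and the approximation holds only where the reference and target are roughly coplanar in a perspective-corrected image.

    # Minimal sketch of deriving dimensions from a standard reference object;
    # illustrative only, not FATHI's implementation.
    PLATE_HEIGHT_IN = 4.5    # US standard single toggle switch plate (FATHI [0084])
    PLATE_WIDTH_IN = 2.75

    def object_dims_inches(plate_px_height: float,
                           obj_px_width: float,
                           obj_px_height: float) -> tuple[float, float]:
        """Scale pixel measurements to inches using the detected wall plate.
        Assumes plate and object lie in roughly the same plane of a
        perspective-corrected image."""
        inches_per_px = PLATE_HEIGHT_IN / plate_px_height
        return obj_px_width * inches_per_px, obj_px_height * inches_per_px

    # Example: a plate spanning 90 px implies 0.05 in/px, so a 1600 px-wide
    # window measures 80 inches across.
    print(object_dims_inches(90.0, 1600.0, 1200.0))    # -> (80.0, 60.0)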
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of AARTS in view of WEXLER of having a method implemented with instructions executed by a processor, with the teachings of FATHI of identifying at least one detected object within the single digital image, wherein said at least one detected object comprises a standard construction object having standard known dimensions, and wherein said standard construction object is an electrical wall plate selected from the group consisting of an electrical outlet wall plate, a toggle switch wall plate, and a paddle switch wall plate.
That is, the method of AARTS is modified to identify at least one detected object within the single digital image, wherein said at least one detected object comprises a standard construction object having standard known dimensions, and wherein said standard construction object is an electrical wall plate selected from the group consisting of an electrical outlet wall plate, a toggle switch wall plate, and a paddle switch wall plate.
The motivation behind the modification would have been to obtain a method that allows for the generation and manipulation of 3D models and improved measurement results, since both AARTS and FATHI concern systems and methods for generating virtual environments from two dimensional images. AARTS provides systems and methods for generating and manipulating an accurate and scaled virtual environment from two dimensional images, while FATHI provides systems and methods for an object-oriented building-development tool that utilizes modeling concepts, information technology and software interoperability to improve photogrammetry and reduce the amount of data needed to produce more accurate 3D digital representations. Please see AARTS et al. (US 20120183204 A1), Abstract and Paragraph [0019, 0032 and 0055] and FATHI et al. (US 20160364885 A1), Abstract, Paragraph [0010, 0030 and 0047].
Regarding claim 2, AARTS in view of WEXLER and in further view of FATHI explicitly teach the method of claim 1, AARTS further teaches wherein the dimensionalized floorplan is a three-dimensional construction image (Fig. 3. Paragraph [0029]-AARTS discloses in step 316, the image conversion unit 116 converts the image from a two dimensional image into a three dimensional image using the dimensions of the room. The image conversion unit 116 generates a three dimensional plane for each wall of the room, and stores these planes in the memory 210. In addition, the image conversion unit 116 converts each object in the room into a three dimensional object by relating the dimensions of each object to the dimensions of the room and the position of each object within the room. Further in paragraph [0042]-AARTS discloses in step 514, a virtual three-dimensional representation of the room in the image 400 is stored in memory 210 of the computer 102 along with the optimal vector x, which defines the dimensions of the room, and the information previously gathered in step 508 that defines the appearance of the room).
Regarding claim 5, AARTS in view of WEXLER and in further view of FATHI explicitly teach the method of claim 1, AARTS further teaches wherein determining dimensions of the detected object is based upon a reference database (Fig. 6. Paragraph [0028]-AARTS discloses in step 310, the image analysis unit 114 identifies objects in the image. The image analysis unit 114 may identify objects in an image by analyzing the pixels in the image to determine lines where the pixel colors change from one color to another. The image analysis unit 114 may also identify objects by comparing areas identified in the image to a database of known images. Further in paragraph [0032]-AARTS discloses in step 322, the GUI 232 on the user device 104 or 106 displays a list of objects to insert into the converted image from the object storage unit 210 in the secondary storage 208 of the computer 102. The objects in the object storage unit 210 include information concerning each object listed including, but not limited to, the dimensions of the object, the color of the surfaces of the object, the composition of each surface and the reflective characteristics of each surface).
Regarding claim 7, AARTS in view of WEXLER and in further view of FATHI explicitly teach the method of claim 1, AARTS fails to explicitly teach wherein determining dimensions of the detected object is based upon identifying the detected object using a brand, serial number, model number or combinations thereof.
However, WEXLER explicitly teaches wherein determining dimensions of the detected object is based upon identifying the detected object using a brand, serial number, model number or combinations thereof (Fig. 3. Paragraph [0118]-WEXLER discloses the present invention provides reference dimension measurement using a reference object, optionally having another use when not used in the present invention, or may be a standard size reference object. Prior to capturing the digital image, the end user may place a standard sized object on the window frame, sill, stool, sash, windowpane, next to the window or within the window frame being photographed, as shown in FIG. 3. Standard sized objects should have an easily identified linear dimension that is viewable in the image. More than one standard sized object may be used in an image. Further in paragraph [0119]-WEXLER discloses the reference object may also be a thin electronic display device such as a tablet or laptop computer display or a cell phone display for which the make and model is known and conveyed to the service provider as metadata. Alternatively, a standard object or figure provided by the service provider may be used, printed or displayed electronically whereby the service provider predetermines the dimensions of the standard object or printed figure).
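For illustration only, a minimal sketch of dimension lookup keyed on an identified brand and model, in the spirit of WEXLER's known-make-and-model display devices; the table entries and names below are hypothetical, not data from the cited reference.

    # Hypothetical lookup table (illustrative entries only, not real device data):
    # (brand, model) -> (width_in, height_in) of the identified reference object.
    DEVICE_DIMENSIONS_IN: dict[tuple[str, str], tuple[float, float]] = {
        ("Acme", "Tab-10"): (9.8, 6.9),
        ("Acme", "Phone-5"): (2.8, 5.7),
    }

    def reference_dimensions(brand: str, model: str) -> tuple[float, float] | None:
        """Return known physical dimensions for an identified device, if on record."""
        return DEVICE_DIMENSIONS_IN.get((brand, model))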
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of AARTS in view of WEXLER and in further view of FATHI of having a method implemented with instructions executed by a processor, with the teachings of WEXLER of having wherein determining dimensions of the detected object is based upon identifying the detected object using a brand, serial number, model number or combinations thereof.
That is, the method of AARTS is modified such that determining dimensions of the detected object is based upon identifying the detected object using a brand, serial number, model number or combinations thereof.
The motivation behind the modification would have been to obtain a method that allows for the generation and manipulation of 3D models and improved measurement results, since both AARTS and WEXLER concern systems and methods for generating virtual environments from two dimensional images. AARTS provides systems and methods for generating and manipulating an accurate and scaled virtual environment from two dimensional images, while WEXLER provides systems and methods that improve the alignment and positioning of an image capture device and allow for the processing of a digital image to improve measurement capabilities, including the distances related to an object depicted in an image. Please see AARTS et al. (US 20120183204 A1), Abstract and Paragraph [0019, 0032 and 0055] and WEXLER (US 20160104288 A1), Abstract, Paragraph [0051 and 0132] and claims 5, 9 and 13.
Regarding claim 8, AARTS in view of WEXLER and in further view of FATHI explicitly teach the method of claim 1, AARTS further teaches wherein the geometrically transformed digital image (Fig. 3. Paragraph [0029]-AARTS discloses in step 316, the image conversion unit 116 converts the image from a two dimensional image into a three dimensional image using the dimensions of the room. In paragraph [0036]-AARTS discloses FIG. 5 illustrates a process of determining the dimensions of a room from an image 400. In step 504, the information gathering unit 114 receives basic dimensional information of the image via the GUI 232, and from the information stored in the image such as EXIF information. The image analysis unit 114 may also use a line analysis algorithm to identify the lines that form the intersections between the walls 402, 404, and 406 in the image (wherein the line analysis algorithm may include a Hough transform algorithm). Please also see Fig. 4-5) is a pixel image (Fig. 5. Paragraph [0037]-AARTS discloses in step 506, the image analysis unit 114 identifies the walls 402, 404, and 406 displayed in the image 400 based on the information gathered from the information gathering unit 112. The walls 402, 404, and 406 are identified in the image as the pixels in the captured image contained in the non-self-intersecting polygons formed by pairs of neighboring lines which form the intersection between walls. In paragraph [0038]-AARTS discloses in step 508, the image analysis unit 114 gathers information on each wall 402, 404, and 406. To gather this information, the image analysis unit 114 systematically analyzes the pixels in each wall to determine the colors of each wall, and the relative location of each color on each wall).
Regarding claim 9, AARTS in view of WEXLER and in further view of FATHI explicitly teach the method of claim 1, AARTS further teaches further comprising: positioning a bounding box around the detected object (Fig. 6A, #602 called a removal area. Paragraph [0045]); and using the bounding box (Fig. 6A, #602 called a removal area. Paragraph [0045]) and the combined output image (Fig. 6A, #400 called an image. Paragraph [0045]. Please also read paragraph [0036, 0038 and 0041]) to determine a digital perimeter of the detected object (Fig. 6A. Paragraph [0047]-AARTS discloses in step 612, the image analysis unit 114 identifies objects 604 within the removal area 602. The image analysis unit 114 may use any known object identification technique such as edge detection, image matching, or any other known image identification technique).
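By way of illustration only, and not as AARTS's disclosed technique, the following minimal sketch refines a bounding box into a digital perimeter by running edge and contour detection inside the box; OpenCV and the threshold values are assumptions.

    # Minimal sketch: derive a digital perimeter for a detected object from its
    # bounding box. Illustrative only; thresholds are arbitrary.
    import cv2
    import numpy as np

    def digital_perimeter(img: np.ndarray,
                          box: tuple[int, int, int, int]) -> np.ndarray:
        """Return the largest edge contour inside `box` = (x, y, w, h),
        expressed in full-image pixel coordinates."""
        x, y, w, h = box
        roi = cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(roi, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            raise ValueError("no contour found inside the bounding box")
        largest = max(contours, key=cv2.contourArea)
        return largest + np.array([x, y])   # shift back to full-image coordinates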
Regarding claim 10, AARTS in view of WEXLER and in further view of FATHI explicitly teach the method of claim 9, AARTS further teaches further comprising using a geometrical correction technique in determining an adjusted digital perimeter of the detected object (Fig. 6A. Paragraph [0046]-AARTS discloses in step 610, the image analysis unit 114 identifies at least one sample area 604 for the identified removal area 602. The sample area 606 may be identified using the same techniques as identifying the removal area 602. Further in paragraph [0048]-AARTS discloses in step 614, the image analysis unit 114 divides the removal area 602 into target patches. The target patches may be of the same size and shape. Each of the target patches represents a portion of the removal area 602 where the pixel information in that area is removed and replaced by the pixel information from the sample area 606. Additionally, in paragraph [0051]-AARTS discloses a single linear gradient of random size, and of random orientation, is applied to each potential sample patch. Subsequently, each potential sample patch is multiplied with an intensity correction factor. Please also read paragraph [0047]).
Regarding claim 11, AARTS in view of WEXLER and in further view of FATHI explicitly teach the method of claim 10, AARTS further teaches wherein the geometrical correction techniques compare the determined digital perimeter of the detected object to dimensions or geometric properties of the detected object to calculate an angular offset therebetween to determine the adjusted digital perimeter (Fig. 6A. Paragraph [0047]-AARTS discloses in step 612, the image analysis unit 114 identifies objects 604 within the removal area 602. The image analysis unit 114 may use any known object identification technique such as edge detection, image matching, or any other known image identification technique. Further in paragraph [0048]-AARTS discloses in step 614, the image analysis unit 114 divides the removal area 602 into target patches. Additionally in paragraph [0050]-AARTS discloses in step 618, a group of sample patches is generated from the identified sample area 606. Each sample patch in the group may be a rectangle of a fixed size that is twenty five percent larger than the size of each target patch. The group of sample patches is created by visiting random locations in each sample area 606, and extracting pixel information from each sample patch. Please also read paragraph [0051]).
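Continuing the illustration, one possible (not necessarily AARTS's) way to compute an angular offset between the determined digital perimeter and the object's expected axis-aligned geometry is a minimum-area rectangle fit:

    # Minimal sketch: angular offset of a detected perimeter relative to the
    # image axes, via a minimum-area rectangle fit. Illustrative only.
    import cv2
    import numpy as np

    def angular_offset_deg(perimeter: np.ndarray) -> float:
        """Rotation angle (degrees) of the rectangle best fitting `perimeter`;
        0 means the perimeter is aligned with the image axes."""
        (_cx, _cy), (_w, _h), angle = cv2.minAreaRect(perimeter.astype(np.float32))
        return angle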
Regarding claim 16, AARTS in view of WEXLER and in further view of FATHI explicitly teach the method of claim 1, AARTS further teaches wherein the single digital image comprises a pixel image (Fig. 4. Paragraph [0028]-AARTS discloses in step 310, the image analysis unit 114 identifies objects in the image. The image analysis unit 114 may identify objects in an image by analyzing the pixels in the image to determine lines where the pixel colors change from one color to another. The image analysis unit 114 may also identify objects by comparing areas identified in the image to a database of known images. In paragraph [0029]-AARTS discloses in step 316, the image conversion unit 116 converts the image from a two dimensional image into a three dimensional image using the dimensions of the room).
Regarding claim 17, AARTS in view of WEXLER and in further view of FATHI explicitly teach the method of claim 16, AARTS further teaches wherein the pixel image comprises a single two-dimensional (2D) pixel image (Fig. 4. Paragraph [0029]-AARTS discloses in step 316, the image conversion unit 116 converts the image from a two dimensional image into a three dimensional image using the dimensions of the room).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over AARTS et al. (US 20120183204 A1), hereinafter referenced as AARTS, in view of WEXLER (US 20160104288 A1), hereinafter referenced as WEXLER, in further view of FATHI (US 20160364885 A1), hereinafter referenced as FATHI, and in further view of GAUSEBECK et al. (US 20190026958 A1), hereinafter referenced as GAUSEBECK.
Regarding claim 3, AARTS in view of WEXLER and in further view of FATHI explicitly teach the method of claim 2, AARTS in view of WEXLER fails to explicitly teach further comprising using square footage of the floorplan while generating the three-dimensional construction image.
However, GAUSEBECK explicitly teaches further comprising using square footage of the floorplan while generating the three-dimensional construction image (Fig. 1. Paragraph [0080]- GAUSEBECK discloses measurement data (e.g., square footage, etc.) associated with surfaces can also be determined based on the derived 3D data corresponding to the respective surfaces and associated with the respective surfaces. These measurements can be displayed in association with viewing and/or navigation of the 3D floorplan model. Calculation of area (e.g., square footage) can be determined for any identified surface or portion of a 3D model with a known boundary by summing areas of polygons comprising the identified surface or the portion of the 3D model. Displays of individual items (e.g., dimensions) and/or classes of items can be toggled in a floorplan via a viewer on a remote device (e.g., via a user interface on a remote client device)).
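By way of illustration only, GAUSEBECK's summing of polygon areas can be pictured with the shoelace formula for a single floorplan polygon; the function name and units below are assumptions.

    # Minimal sketch: square footage of a floorplan polygon via the shoelace
    # formula. Illustrative only.
    def polygon_area(vertices: list[tuple[float, float]]) -> float:
        """Area of a simple polygon whose (x, y) corners are given in order;
        result is in square feet if the coordinates are in feet."""
        n = len(vertices)
        total = 0.0
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            total += x1 * y2 - x2 * y1
        return abs(total) / 2.0

    # Example: a 12 ft x 10 ft rectangular room -> 120.0 sq ft.
    assert polygon_area([(0, 0), (12, 0), (12, 10), (0, 10)]) == 120.0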
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of AARTS in view of WEXLER and in further view of FATHI of having a method implemented with instructions executed by a processor, with the teachings of GAUSEBECK of further comprising using square footage of the floorplan while generating the three-dimensional construction image.
That is, the method of AARTS is modified to use square footage of the floorplan while generating the three-dimensional construction image.
The motivation behind the modification would have been to obtain a method that allows for the generation and manipulation of 3D models and improved measurement results, since both AARTS and GAUSEBECK concern systems and methods for generating virtual environments from two dimensional images. AARTS provides systems and methods for generating and manipulating an accurate and scaled virtual environment from two dimensional images, while GAUSEBECK provides systems and methods for predicting 3D data from 2D data using deep learning techniques that enhance perception accuracy. Please see AARTS et al. (US 20120183204 A1), Abstract and Paragraph [0019, 0032 and 0055] and GAUSEBECK et al. (US 20190026958 A1), Abstract and Paragraph [0123, 0132, 0160 and 0171].
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over AARTS et al. (US 20120183204 A1), hereinafter referenced as AARTS, in view of WEXLER (US 20160104288 A1), hereinafter referenced as WEXLER, in further view of FATHI (US 20160364885 A1), hereinafter referenced as FATHI, and in further view of LORENZO (US 20190327413 A1), hereinafter referenced as LORENZO.
Regarding claim 4, AARTS in view of WEXLER and in further view of FATHI explicitly teach the method of claim 1, AARTS in view of WEXLER fails to explicitly teach wherein the dimensionalized floorplan is wireframe image data.
However, LORENZO explicitly teaches wherein the dimensionalized floorplan (Fig. 7. Paragraph [0073]-LORENZO discloses referring now to FIG. 7, a flow diagram illustrating non-panoramic image transfer. FIG. 7 shows the process flow between a non-360 degree image capture device 116, an image processing device 600, and a 2D floor plan and 3D model 704. In paragraph [0076]-LORENZO discloses at block 724, the non-360 degree image capture device 116 captures one or more non-360 degree images. Non-360 degree images 724 may be transferred one at a time as captured images 728. In paragraph [0077]-LORENZO discloses the image processing device 600 then extracts 2D X-Y coordinates and orientations from the received image or images 728. In paragraph [0078]-LORENZO discloses once the 2D coordinates and orientations have been determined 732, the X-Y coordinates are converted into 3D model coordinates 736) is wireframe image data (Fig. 7. Paragraph [0079]-LORENZO discloses next, the captured images and model viewpoints are uploaded to the 2D floor plan 744 at the corresponding 2D X-Y coordinates (wherein the 2D floorplan is a dimensionalized floorplan as wireframe image data). Please also see Fig. 3 and 5 and read paragraph [0074]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of AARTS in view of WEXLER and in further view of FATHI of having a method implemented with instructions executed by a processor, with the teachings of LORENZO of having wherein the dimensionalized floorplan is wireframe image data.
That is, the method of AARTS is modified such that the dimensionalized floorplan is wireframe image data.
The motivation behind the modification would have been to obtain a method that allows for the generation, manipulation and annotation of 3D models, since both AARTS and LORENZO concern systems and methods for generating virtual environments from two dimensional images. AARTS provides systems and methods for generating and manipulating an accurate and scaled virtual environment from two dimensional images, while LORENZO provides systems and methods for the creation of annotations on building floor plans, which allow for quick comparisons between photographs and corresponding model viewpoints in 3D models of the same building. Please see AARTS et al. (US 20120183204 A1), Abstract and Paragraph [0019, 0032 and 0055] and LORENZO (US 20190327413 A1), Abstract and Paragraph [0028].
Claims 12 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over AARTS et al. (US 20120183204 A1), hereinafter referenced as AARTS, in view of WEXLER (US 20160104288 A1), hereinafter referenced as WEXLER, in further view of FATHI (US 20160364885 A1), hereinafter referenced as FATHI, and in further view of Buzz et al. (US 20140244338 A1), hereinafter referenced as Buzz.
Regarding claim 12, AARTS in view of WEXLER and in further view of FATHI explicitly teach the method of claim 1, AARTS further teaches further comprising: identifying a reference dimension within the digital image (Fig. 3. Paragraph [0027]-AARTS discloses in step 306, information concerning the captured image is gathered by the information gathering unit 112. Please also read paragraph [0028]);
calculating the length of the reference dimension (Fig. 3. Paragraph [0028]-AARTS discloses in step 310, the image analysis unit 114 identifies objects in the image. The image analysis unit 114 may identify objects in an image by analyzing the pixels in the image to determine lines where the pixel colors change from one color to another. The image analysis unit 114 may also identify objects by comparing areas identified in the image to a database of known images);
converting the length of the reference dimension into a pixel equivalency dimension (Fig. 3. Paragraph [0029]-AARTS discloses in step 316, the image conversion unit 116 converts the image from a two dimensional image into a three dimensional image using the dimensions of the room. The image conversion unit 116 generates a three dimensional plane for each wall of the room, and stores these planes in the memory 210. In addition, the image conversion unit 116 converts each object in the room into a three dimensional object by relating the dimensions of each object to the dimensions of the room and the position of each object within the room. Further in paragraph [0043]- AARTS discloses the image analysis unit 114 may consistently adjust the lengths of walls and other object dimensions to ensure the accuracy of the image is maintained. Please also read paragraph [0055]);
AARTS in view of WEXLER fails to explicitly teach and using the reference dimension to produce a three-dimensional context-rich takeoff package.
However, Buzz explicitly teaches and using the reference dimension to produce a three-dimensional (Fig. 1. Paragraph [0014]-Buzz discloses the Image tab 108 provides a view of a construction drawing, such as the construction drawing 120. Further in paragraph [0017]-Buzz discloses the two dimensional image shown in the drawing 120 can also be three dimensional (3D) such as a 3D CAD drawing, Building Information Model (BIM), or the like) context-rich takeoff package (Fig. 1. Paragraph [0014]-Buzz discloses a dropdown menu 104 allows the user to select which page or area of the construction project to view. The selected area can be any part of the construction project for which a separate construction drawing exists. For example, the dropdown menu 104 in screen 100 shows that the 2nd Floor Plan has been selected. As such the corresponding drawing 120 includes all the various building conditions in the 2nd Floor area of this construction project).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of AARTS in view of WEXLER and in further view of FATHI of having a method implemented with instructions executed by a processor, with the teachings of Buzz of using the reference dimension to produce a three-dimensional context-rich takeoff package.
That is, the method of AARTS is modified to use the reference dimension to produce a three-dimensional context-rich takeoff package.
The motivation behind the modification would have been to obtain a method that improves the accuracy for generating 3D models, since both AARTS and Buzz concern systems and methods for generating, displaying and analyzing three dimensional images in the context of building. AARTS provides systems and methods for generating and manipulating an accurate and scaled virtual environment from two dimensional images, while Buzz’s systems and methods allow detailed takeoff packages based on multiple building conditions to be produced for a three-dimensional digital representation of a construction project. Please see AARTS et al. (US 20120183204 A1), Abstract and Paragraph [0019, 0032 and 0055] and Buzz et al. (US 20140244338 A1), Abstract and Paragraph [0014-0015 and 0017].
Regarding claim 14, AARTS in view of WEXLER and in further view of FATHI and in further view of BUZZ explicitly teach the method of claim 12, AARTS further teaches wherein pixel walking is used in calculating the length of the reference dimension (Fig. 5. Paragraph [0036]-AARTS discloses FIG. 5 illustrates a process of determining the dimensions of a room from an image 400. In step 502, the information gathering unit 112 presents the image 400 to a user via the GUI 232 on the client device 104/106. In step 504, the information gathering unit 114 receives basic dimensional information of the image via the GUI 232, and from the information stored in the image such as EXIF information (wherein dimensional information may include the height h of the room 400 depicted in the image 400, the angles a and b between walls 402, 404 and 406, the length of lines that form intersections between walls, ceiling, and the floor 408, a depth dimension, such as the width of an object or feature on the walls). The image analysis unit 114 may also use a line analysis algorithm to identify the lines that form the intersections between the walls (wherein the line analysis algorithm may include a Hough transform algorithm). Further in paragraph [0047]-AARTS discloses in step 612, the image analysis unit 114 identifies objects 604 within the removal area 602. The image analysis unit 114 may use any known object identification technique such as edge detection. In paragraph [0048]-AARTS discloses in step 614, the image analysis unit 114 divides the removal area 602 into target patches. In paragraph [0049]-AARTS discloses in step 616, the image analysis unit 114 identifies the traversal order of the target patches in the removal area 602. The target patch traversal order may be based on the amount of pixel information available on the borders of each target patch).
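By way of illustration only, "pixel walking" a straight reference segment can be pictured as Bresenham-style stepping that counts the pixels traversed, after which the reference's known physical length yields a pixel-equivalency scale; this is an assumed reading of the claim term, not AARTS's disclosed algorithm.

    # Minimal sketch: pixel-walk a straight reference segment and convert its
    # known physical length into a pixel-equivalency scale. Illustrative only.
    def walk_length_px(p0: tuple[int, int], p1: tuple[int, int]) -> int:
        """Count pixels visited while walking from p0 to p1 (Bresenham stepping)."""
        x0, y0 = p0
        x1, y1 = p1
        dx, dy = abs(x1 - x0), abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx - dy
        count = 0
        while True:
            count += 1
            if (x0, y0) == (x1, y1):
                break
            e2 = 2 * err
            if e2 > -dy:
                err -= dy
                x0 += sx
            if e2 < dx:
                err += dx
                y0 += sy
        return count

    def inches_per_pixel(reference_length_in: float, reference_px: int) -> float:
        """Pixel-equivalency dimension: physical length represented by one pixel."""
        return reference_length_in / reference_px

    # Example: a 36-inch reference spanning 240 pixel steps -> 0.15 in/px.
    steps = walk_length_px((10, 10), (250, 10)) - 1    # 241 pixels visited, 240 steps
    print(inches_per_pixel(36.0, steps))               # -> 0.15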
Regarding claim 15, AARTS in view of WEXLER and in further view of FATHI and in further view of BUZZ explicitly teach the method of claim 12, AARTS fails to explicitly teach wherein the three-dimensional context-rich takeoff package comprises a three-dimensional construction image and a corresponding materials takeoff list.
However, Buzz explicitly teaches wherein the three-dimensional context-rich takeoff package (Fig. 1. Paragraph [0014]-Buzz discloses a dropdown menu 104 allows the user to select which page or area of the construction project to view. The selected area can be any part of the construction project for which a separate construction drawing exists. For example, the dropdown menu 104 in screen 100 shows that the 2nd Floor Plan has been selected. As such the corresponding drawing 120 includes all the various building conditions in the 2nd Floor area of this construction project) comprises a three-dimensional construction image (Fig. 1. Paragraph [0014]-Buzz discloses the Image tab 108 provides a view of a construction drawing, such as the construction drawing 120. Further in paragraph [0017]-Buzz discloses the two dimensional image shown in the drawing 120 can also be three dimensional (3D) such as a 3D CAD drawing, Building Information Model (BIM), or the like) and a corresponding materials takeoff list (Fig. 5A-B and 6, #116 called a conditions list. Paragraph [0034]-Buzz discloses in the screen 400 of FIG. 5A, four building conditions have been selected. These include 24'' x 24'' Ceramic Tile Floor, Ceramic Tile Base, Paint Walls--Epoxy 9', and Paint GWB Ceilings).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of AARTS in view of WEXLER and in further view of FATHI and in further view of Buzz of having a method implemented with instructions executed by a processor, with the teachings of Buzz of having wherein the three-dimensional context-rich takeoff package comprises a three-dimensional construction image and a corresponding materials takeoff list.
That is, the method of AARTS is modified such that the three-dimensional context-rich takeoff package comprises a three-dimensional construction image and a corresponding materials takeoff list.
The motivation behind the modification would have been to obtain a method that improves the accuracy for generating 3D models, since both AARTS and Buzz concern systems and methods for generating, displaying and analyzing three dimensional images in the context of building. AARTS provides systems and methods for generating and manipulating an accurate and scaled virtual environment from two dimensional images, while Buzz’s systems and methods allow detailed takeoff packages based on multiple building conditions to be produced for a three-dimensional digital representation of a construction project. Please see AARTS et al. (US 20120183204 A1), Abstract and Paragraph [0019, 0032 and 0055] and Buzz et al. (US 20140244338 A1), Abstract and Paragraph [0014-0015 and 0017].
Claims 13 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over AARTS et al. (US 20120183204 A1), hereinafter referenced as AARTS, in view of WEXLER (US 20160104288 A1), hereinafter referenced as WEXLER, in further view of FATHI (US 20160364885 A1), hereinafter referenced as FATHI, in further view of Buzz et al. (US 20140244338 A1), hereinafter referenced as Buzz, and in further view of HALLIDAY et al. (US 20170132835 A1), hereinafter referenced as HALLIDAY.
Regarding claim 13, AARTS in view of WEXLER and in further view of FATHI and in further view of Buzz explicitly teach the method of claim 12, AARTS fails to explicitly teach further comprising validating the pixel equivalency dimension by using objects of standard dimensions.
However, HALLIDAY explicitly teaches further comprising validating the pixel equivalency dimension by using objects of standard dimensions (Fig. 3. Paragraph [0035]-HALLIDAY discloses in step 303, perimeter boundaries for the identified architectural element(s) are defined by correlating, for example, perimeter points, vertices, corner points, edges or specific salient pixels (e.g., neighboring pixels with noted changes in contrast, density or color, side pixels, corner pixels, etc.) of the defined architectural element within the ground-based image to the corresponding boundaries (represented by x, y, z positions) within the 3D building model (e.g., within two images having a common element). Pixel positions are extrapolated from vertices/edges of the ground-level image. Further in paragraph [0036]-HALLIDAY discloses in step 304, dimensional ratios of distances spanning the width and length of the identified architectural element are determined using, for example, image processing system 100 of FIG. 1. Additionally in paragraph [0037]-HALLIDAY discloses in step 305, determined ratios are compared to known standard architectural element dimensional ratios (width-to-length, width-to-height or area). Moreover in paragraph [0041]-HALLIDAY discloses in step 306, the ratio which is closest to a known standard architectural element dimensional ratio is used as a scale. And in step 308, the scale is used to scale/rescale a multi-dimensional (e.g., 2D/3D) building model).
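By way of illustration only, HALLIDAY's ratio comparison can be pictured as follows: compare a detected element's width-to-height ratio against known standard architectural-element ratios and take the closest match as the basis for a scale. The table below is hypothetical example data; the door size is an assumed nominal value, not taken from the cited reference.

    # Minimal sketch of choosing/validating a scale by comparing detected
    # dimensional ratios to standard architectural-element ratios. Illustrative
    # only; the table entries are hypothetical.
    STANDARD_RATIOS = {
        "single_toggle_switch_plate": 2.75 / 4.5,   # width / height, US standard
        "entry_door": 36.0 / 80.0,                  # assumed nominal 36 in x 80 in
    }

    def closest_standard_element(px_width: float,
                                 px_height: float) -> tuple[str, float]:
        """Return the standard element whose width/height ratio best matches
        the detected one, plus the residual ratio error."""
        detected = px_width / px_height
        name = min(STANDARD_RATIOS,
                   key=lambda k: abs(STANDARD_RATIOS[k] - detected))
        return name, abs(STANDARD_RATIOS[name] - detected)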
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of AARTS in view of WEXLER and in further view of FATHI and in further view of BUZZ of having a method implemented with instructions executed by a processor, with the teachings of HALLIDAY of further comprising validating the pixel equivalency dimension by using objects of standard dimensions.
That is, the method of AARTS is modified to validate the pixel equivalency dimension by using objects of standard dimensions.
The motivation behind the modification would have been to obtain a method that improves the accuracy for generating 3D models, since both AARTS and HALLIDAY concern systems and methods for generating virtual environments from images. AARTS provides systems and methods for generating and manipulating an accurate and scaled virtual environment from two dimensional images, while HALLIDAY provides systems and methods that enhance the accuracy of scaling when constructing a labeled and dimensioned multi-dimensional (e.g., 3D) building model from building object imagery. Please see AARTS et al. (US 20120183204 A1), Abstract and Paragraph [0019, 0032 and 0055] and HALLIDAY et al. (US 20170132835 A1), Abstract and Paragraph [0031, 0040 and 0042].
Regarding claim 18, AARTS in view of WEXLER and in further view of FATHI explicitly teach the method of claim 17, although AARTS explicitly teaches the single 2D pixel image (Fig. 4. Paragraph [0029]-AARTS discloses in step 316, the image conversion unit 116 converts the image from a two dimensional image into a three dimensional image using the dimensions of the room).
AARTS in view of WEXLER fails to explicitly teach wherein the single 2D pixel image comprises a blueprint.
However, HALLIDAY explicitly teaches wherein the single 2D pixel image comprises a blueprint (Fig. 4. Paragraph [0097]-HALLIDAY discloses referring back to step 418 (FIG. 4), a 3D blue print with dimensions and/or labels is created. The 3D blueprint can be returned to the capture device (e.g., smartphone) to be displayed to the user or be displayed to a potential homeowner as part of a remodeling or repair proposal (estimate)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of AARTS in view of WEXLER and in further view of FATHI and in further view of BUZZ of having a method implemented with instructions executed by a processor, with the teachings of HALLIDAY of having wherein the single 2D pixel image comprises a blueprint.
That is, the method of AARTS is modified such that the single 2D pixel image comprises a blueprint.
The motivation behind the modification would have been to obtain a method that improves the accuracy for generating 3D models, since both AARTS and HALLIDAY concern systems and methods for generating virtual environments from images. AARTS provides systems and methods for generating and manipulating an accurate and scaled virtual environment from two dimensional images, while HALLIDAY provides systems and methods that enhance the accuracy of scaling when constructing a labeled and dimensioned multi-dimensional (e.g., 3D) building model from building object imagery. Please see AARTS et al. (US 20120183204 A1), Abstract and Paragraph [0019, 0032 and 0055] and HALLIDAY et al. (US 20170132835 A1), Abstract and Paragraph [0031, 0040 and 0042].
Conclusion
Listed below is the prior art made of record and not relied upon that is considered pertinent to applicant's disclosure.
Lee et al. (US 20180268220 A1)- Systems and methods for estimating a layout of a room are disclosed. The room layout can comprise the location of a floor, one or more walls, and a ceiling. In one aspect, a neural network can analyze an image of a portion of a room to determine the room layout. The neural network can comprise a convolutional neural network having an encoder sub-network, a decoder sub-network, and a side sub-network. The neural network can determine a three-dimensional room layout using two-dimensional ordered keypoints associated with a room type. The room layout can be used in applications such as augmented or mixed reality, robotics, autonomous indoor navigation, etc...................Please see Fig. 1A-C. Abstract.
Tang et al. (US 11551422 B2)-Various implementations disclosed herein include
devices, systems, and methods that generate floorplans and measurements using a three-dimensional (3D) representation of a physical environment generated based on sensor data..................Please see Fig. 4-8. Abstract.
PATE et al. (US 20120259743 A1)- The disclosed system and method provide the user with the capability of selecting from among various particular products utilized in the design of an interior space, such as a kitchen, reviewing product specifications, design and finish combinations, and visualizing the products, both in isolation and placed in a photorealistic depiction of the selected products in one of a selected number of different room layouts. The system provides the user with the option of creating a unique account, including contact information, and information relating to one or more product selections preferred by the customer, which may then be electronically transmitted by the customer to a selected location remote from the system. The system and method may also provide the capability for a limited group, such as product manufacturer and/or dealer personnel, to access one or more of the customers' unique accounts to obtain contact information and/or product preferences for the customers....................Please see Fig. 4-5 and 12. Abstract.
LI et al. (US 20210064216 A1)- Techniques are described for using computing devices to perform automated operations involved in analysis of images acquired in a defined area, as part of generating mapping information of the defined area for subsequent use (e.g., for controlling navigation of devices, for display on client devices in corresponding GUIs, etc.). The defined area may include an interior of a multi-room building, and the generated information including a floor map of the building, such as from an analysis of multiple 360° spherical panorama images acquired at various viewing locations within the building (e.g., using an image acquisition device with a spherical camera having one or more fisheye lenses to capture a panorama image that extends 360 degrees around a vertical axis)—the generating may be further performed without detailed information about distances from the images' viewing locations to objects in the surrounding building...................Please see Fig. 4-8. Abstract.
SHUSTER et al. (US 20170186228 A1)- An apparatus, method and system facilitate efficient creation of virtual places and provide tools for using the virtual places. The virtual places include a virtual real estate listing, newsworthy place and a virtual box seat. Tools are provided including an automatic declutter tool and a staging tool.…................. Please see Fig. 1, 3-4 and 5. Abstract.
Pitzer et al. (US 20140267717 A1)- A method for generating a floor plan of a room uses a mobile electronic device and a range finder to produce approximate and modified floor plans. The method includes receiving a plurality of strokes corresponding to walls in a room with a gesture input device, generating an approximate floor plan of the room based on the strokes, generating an approximate floor plan of the room with reference to the plurality of strokes, receiving an input gesture corresponding to one wall in the approximate floor plan of the room for measurement, receiving measurement data from a range finder corresponding to a dimension of the selected one wall, modifying the approximate floor plan with reference to the measurement data from the range finder, and generating with a display of the modified floor plan of the room...…................. Please see Fig. 2. Abstract.
NAGAR et al. (US 20160300293 A1)- Embodiments of the present disclosure are directed to methods, systems, and devices for designing a commercial or residential space via a design application. For example, in some embodiments, a method is disclosed which enables a user to input information including, for example, photos or video of the space, lighting, color(s), user preferences, measurements, and the configuration and/or location of openings in the space. In such embodiments, the user can select a design theme, style, or designer, and based on the information input (or acquired), the method presents recommendations of a new design for the space, which may include recommendations of products to furnish the space. Further embodiments also include enabling the user to purchase such products, and may also allow the user to hire service personnel to construct the recommended design and/or install selected/purchased products. Please see Figs. 1-6 and the Abstract.
CALMAN et al. (US 8668498 B2)- A system, method, and computer program product are provided for using real-time video analysis, such as augmented reality, to assist a user of a mobile device with interior design. Through the use of real-time object recognition, features, logos, artwork, products, locations, etc. can be recognized in a real-time video stream and can subsequently be matched with data associated with them to assist the user with selecting the proper design elements for a space, such as a kitchen. The proper design elements may be based on several factors regarding the design space, including the dimensions of the space, the location of windows, doors, and outlets, geographic and positional data, other space features, current design elements, architectural features, decor, and style data of the space, etc. This invention provides a virtual area with design elements, such that a user may view the area and determine the proper decor for the area. Please see Figs. 4-6 and the Abstract.
Hodgkins et al. (US 20220121785 A1)- A system and method for determining material take-off from a 2D drawing are provided. A pre-processing component receives and pre-processes drawings before they are categorised by a categoriser component, by way of pre-trained convolutional neural networks, to determine the type of the processed image from one or more categories of drawing types. A material identifier component determines the probability that a feature in the processed image is present, and an output component provides a unique identifier for each feature; a list of coordinates indicating the location of the feature on the processed image; and/or a list of coordinates describing the location of any text or other encoded information that is associated with the feature. Please see Figs. 2-3 and the Abstract.
Golparvar-Fard et al. (US 9070216 B2)- A method for monitoring construction progress may include storing in memory multiple unordered images obtained from photographs taken at a site; melding the multiple images to reconstruct a dense three-dimensional (3D) as-built point cloud model including merged pixels from the multiple images in 3D space of the site; rectifying and transforming the 3D as-built model to a site coordinate system existing within a 3D as-planned building information model ("as-planned model"); and overlaying the 3D as-built model with the 3D as-planned model for joint visualization thereof to display progress towards completion of a structure shown in the 3D as-planned model. The processor may further link a project schedule to the 3D as-planned model to generate a 4D chronological as-planned model that, when visualized with the 3D as-built point cloud, provides clash detection and schedule quality control during construction. Please see Figs. 2-5, 12, 20, and 29 and the Abstract.
Sasson et al. (US 20190180433 A1)- Systems and methods for annotation of construction site images are provided. For example, image data captured from a construction site using at least one image sensor may be obtained. Further, at least one construction plan associated with the construction site and including information related to an object may be obtained. The at least one construction plan may be analyzed to identify a first region of the image data corresponding to the object. The at least one display device may be used to present at least part of the image data to a user with an indication of the identified first region of the image data corresponding to the object. Further, the at least one display device may be used to present to the user a query related to the object. A response to the query may be received from the user. The response may be used to update information associated with the object in at least one electronic record associated with the construction site. Please see Figs. 6, 9, 10A, and 11-12 and the Abstract.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron Bonansinga, whose telephone number is (703) 756-5380. The examiner can normally be reached Monday-Friday, 9:00 a.m. - 6:00 p.m. ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON TIMOTHY BONANSINGA/
Examiner, Art Unit 2673

/CHINEYERE WILLS-BURNS/
Supervisory Patent Examiner, Art Unit 2673