DETAILED ACTION
This Office action is responsive to the application filed on 11/9/22, which is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4 and 13-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Park et al. (WO 2021176262).
With respect to claim 1, Park et al. discloses “A method of generating translated architectural plans” as [Park et al. (paragraph [0005] “Embodiments of the disclosed techniques disclose methods for planning an indoor radio network for a building. In one embodiment, a method comprises preprocessing an image of a floor plan of the building; generating a radio propagation map for the floor plan using the preprocessed image; and determining an indoor radio transmitter distribution for the floor plan using the radio propagation map”, Park et al. (paragraph [0032] “Currently when building an indoor cellular network (also referred to as an indoor radio network), indoor radio design engineers use complex simulation tools to predict the radio propagation pattern from new radio transmitter installations in a given floor plan.”)];
“inputting an architectural plan into a trained machine learning model” as [Park et al. (paragraph [0045] “A machine-learning model is used to translate a floor plan to a radio propagation map (e.g., a heatmap), and Figures 2A-B show an exemplary machine learning model for producing a heatmap per one embodiment. The machine-learning model may be a conditional Generative Adversarial Network (cGAN) discussed herein above. Figure 2A shows a generator of the neural network and Figure 2B shows a discriminator of the neural network. Image X (at references 202 and 212) is the input image, which includes a floor plan, and image Y (at references 204 and 214) is the corresponding heatmap used in training the machine-learning model.”, Figs. 2A-2B)];
“and receiving a translated architectural plan as an output from the trained machine learning model.” as [Park et al. (paragraph [0046] “An objective of the machine-learning model is to translate a floor plan image to a heatmap image, and it is closely related to the colorization problem. The colorization of a given sketch of floor plan not only needs to preserve its border shape, but also needs to learn from its internal structure to generate the correct signal heatmap. … Yet a cGAN learns the mapping from an input image and random noise to the corresponding output image. Thus, one embodiment adopts cGAN but drops the random noise as the neural network architecture, since the focus is on generating one heatmap with optimal radio dot placement.”)];
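For illustration of the image-to-image translation pattern that Park et al. describe (a conditional GAN that maps a floor plan image X to a heatmap image Y), the following minimal PyTorch sketch shows one conditional generator/discriminator step of that kind. It is a toy example; the network sizes, tensor names, and loss weighting are hypothetical and are not taken from Park et al.

```python
# Illustrative only: a minimal pix2pix-style conditional GAN step of the kind
# Park et al. describe (floor plan image -> radio heatmap image). Module and
# tensor names are hypothetical and do not reflect the reference's code.
import torch
import torch.nn as nn

generator = nn.Sequential(            # stand-in for a U-Net generator
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
)
discriminator = nn.Sequential(        # stand-in for a PatchGAN discriminator
    nn.Conv2d(1 + 3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 4, stride=2, padding=1),     # per-patch real/fake scores
)

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

floor_plan = torch.rand(1, 1, 64, 64)    # image X: input floor plan
real_heatmap = torch.rand(1, 3, 64, 64)  # image Y: paired target heatmap

fake_heatmap = generator(floor_plan)
# The discriminator scores the (input, output) pair, which is what makes the
# GAN "conditional" on the floor plan rather than on noise alone.
pred_fake = discriminator(torch.cat([floor_plan, fake_heatmap], dim=1))
g_loss = bce(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1(fake_heatmap, real_heatmap)
g_loss.backward()  # a generator optimizer step would follow in a full training loop
```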
With respect to claim 2, Park et al. discloses “wherein the trained machine learning model comprises a generative adversarial network.” as [Park et al. (paragraph [0045] “A machine-learning model is used to translate a floor plan to a radio propagation map (e.g., a heatmap), and Figures 2A-B show an exemplary machine learning model for producing a heatmap per one embodiment. The machine-learning model may be a conditional Generative Adversarial Network (cGAN) discussed herein above. Figure 2A shows a generator of the neural network and Figure 2B shows a discriminator of the neural network. Image X (at references 202 and 212) is the input image, which includes a floor plan, and image Y (at references 204 and 214) is the corresponding heatmap used in training the machine-learning model.”, Figs. 2A-2B)];
With respect to claim 3, Park et al. discloses “wherein the trained machine learning model is trained by providing a paired training set of architectural plan images.” as [Park et al. (paragraph [0045] “A machine-learning model is used to translate a floor plan to a radio propagation map (e.g., a heatmap), and Figures 2A-B show an exemplary machine learning model for producing a heatmap per one embodiment. The machine-learning model may be a conditional Generative Adversarial Network (cGAN) discussed herein above. Figure 2A shows a generator of the neural network and Figure 2B shows a discriminator of the neural network. Image X (at references 202 and 212) is the input image, which includes a floor plan, and image Y (at references 204 and 214) is the corresponding heatmap used in training the machine-learning model.”, Figs. 2A-2B)];
With respect to claim 4, Park et al. discloses “wherein the paired training set comprises corresponding architectural plan and translated architectural plan pairs.” as [Park et al. (paragraph [0039] “In an exemplary implementation, the system trains a conditional Generative Adversarial Network (cGAN) to predict the desirable radio transmitter placement using a large number of data sets, each including (1) a floor plan and (2) its corresponding radio propagation map (which is presumably optimal as determined by radio design engineers). For example, a pixel-to-pixel network may be trained using pairs of floor plans and their radio propagation maps based on either optimal radio transmitter placement by human designers using industry standard indoor radio planning tools or from radio measurements collected from already deployed radio transmitters for various floor plans.”, Park et al. paragraph [0045] “A machine-learning model is used to translate a floor plan to a radio propagation map (e.g., a heatmap), and Figures 2A-B show an exemplary machine learning model for producing a heatmap per one embodiment. The machine-learning model may be a conditional Generative Adversarial Network (cGAN) discussed herein above. Figure 2A shows a generator of the neural network and Figure 2B shows a discriminator of the neural network. Image X (at references 202 and 212) is the input image, which includes a floor plan, and image Y (at references 204 and 214) is the corresponding heatmap used in training the machine-learning model.”, Figs. 2A-2B)];
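As context for the “paired training set” limitation, the sketch below shows one common way paired data of the kind Park et al. describe in paragraph [0039] (each example being a floor plan and its corresponding radio propagation map) could be organized for training. The directory layout, file names, and class name are hypothetical.

```python
# Illustrative only: a paired (floor plan, propagation map) dataset of the kind
# Park et al. describe. File layout and names are hypothetical.
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF


class PairedPlanDataset(Dataset):
    def __init__(self, root):
        # Hypothetical layout: matching file names under two sibling folders.
        self.plan_paths = sorted(Path(root, "floor_plans").glob("*.png"))
        self.map_paths = sorted(Path(root, "heatmaps").glob("*.png"))

    def __len__(self):
        return len(self.plan_paths)

    def __getitem__(self, idx):
        plan = TF.to_tensor(Image.open(self.plan_paths[idx]).convert("L"))
        heatmap = TF.to_tensor(Image.open(self.map_paths[idx]).convert("RGB"))
        return plan, heatmap  # one (input, target) pair per training example
```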
With respect to claim 13, Park et al. discloses “A computer system for generating translated architectural plans, the system comprising: at least one processor and a memory having stored thereon instructions” as [Park et al. (paragraph [0006] “Embodiments of the disclosed techniques disclose electronic devices for planning an indoor radio network for a building. In one embodiment, an electronic device comprises a processor and non-transitory machine-readable storage medium storing instructions, which when executed by the processor”, Park et al. (paragraph [0027] “Figure 15 is a flow diagram showing the operations implemented in a communication system including a host computer, a base station, and a user equipment per some embodiments.”)];
The other limitations of the claim recite the same substantive limitations as claim 1 above, and are rejected using the same teachings.
With respect to claims 14-16, the claims recite the same substantive limitations as claims 2-4 above, and are rejected using the same teachings.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 5-12 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (WO 2021176262) in view of Aldeborgh et al. (U.S. PGPub 20210156693).
With respect to claim 5, Park et al. discloses “splitting the translated architectural plan to generate translated layers” as [Park et al. (paragraph [0047] “In order to preserve most of the floor plan structure, one embodiment adopts the U-Net architecture as the generator. Each convolutional layer extracts features from the previous layer and passes it to the next layer. The shallow layers are responsible for extracting low-level features from a given image such as different type of lines. The middle layers are responsible for extracting mid-level features such as shape and texture. The deep layers are responsible for extracting high-level features such as object, composition of different shapes, or even more complicated signals. In Figure 2A, the layers go from shallow to deep from left to right.”, Figs. 2A and 2B)];
“and post processing the translated layers to produce processed translated layers.” as [Park et al. (paragraph [0048] “The encoder here serves the same purpose of extracting low-level to high-level features. But unlike the generator, the goal for the discriminator is to classify between a real heatmap and a fake heatmap from the generator. Therefore, the deep layer features are sufficient to achieve this task. In one embodiment, the PatchGAN architecture is used to output a matrix of probabilities for the final layer in the discriminator to show whether each section of the image can be classified as the real image or not.”)];
While Park et al. teaches splitting the translated architectural plan to generate translated layers and post processing the translated layers to produce processed translated layers, Park et al. does not explicitly disclose “wherein the translated architectural plan comprises at least two classes of architectural features.”
Aldeborgh et al. discloses “wherein the translated architectural plan comprises at least two classes of architectural features” as [Aldeborgh et al. (paragraph [0011] “For example, the architectural floor plan may include a file (e.g., a bitmap image file, a raster image file, a computer-aided design (CAD) drawing, and/or the like) that represents an interior of the building (e.g., an image of an interior floor plan showing an overhead view of rooms, walls, dividers, spaces, objects, obstacles, and/or other physical features at one level of the building, such as an office, a floor of the building, a level of a public venue, and/or the like).”, Aldeborgh et al. paragraph [0012] “In some implementations, navigation platform 115 may utilize a vectorization model that converts the architectural floor plan (e.g., from an image format) into a vector representation of the architectural floor plan. For example, the vectorization model may convert a two-dimensional image (e.g., the architectural floor plan) into a two-dimensional vector representation of the image. The vector representation may be provided in a vector file format (e.g., a scalable vector graphics (SVG) format, an encapsulated postscript (EPS) format, and/or the like) and may represent the architectural floor plan as a set of vectors or shapes (e.g., polygons), as shown in FIG. 1B.”, Aldeborgh et al. paragraph [0013] “The machine learning model may recognize the extraneous features based on metadata included in the vectorized floor plan, based on image recognition of the extraneous features, and/or the like. In some implementations, an extraneous feature may include a representation of a doorway, textual information, one or more compass arrows, an architectural icon that does not represent an actual physical feature of the building, and/or the like.”, The examiner considers the doorway and walls to be the two classes, since an architectural feature class can be walls, windows or a doorway, see paragraph [0066] of the specification)];
Park et al. and Aldeborgh et al. are analogous art because they are from the same field of endeavor of analyzing a floor plan of a building.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of Park et al. of splitting the translated architectural plan to generate translated layers and post processing the translated layers to produce processed translated layers by incorporating wherein the translated architectural plan comprises at least two classes of architectural features as taught by Aldeborgh et al. for the purpose of generating paths on the interior of a building.
Park et al. in view of Aldeborgh et al. teaches wherein the translated architectural plan comprises at least two classes of architectural features.
The motivation for doing so would have been that Aldeborgh et al. teaches that generating paths on the interior of a building allows the exact location of a client device associated with a user to be determined, which helps the user navigate a floor plan of the building (Aldeborgh et al., paragraphs [0006]-[0007]).
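For context on the layer structure quoted above from Park et al. paragraphs [0047]-[0048], the sketch below shows a toy U-Net-style generator with a single skip connection, illustrating the shallow (lines), middle (shape/texture), and deep (objects/composition) feature stages that the reference describes. The layer sizes, channel counts, and names are hypothetical and are not the reference's actual architecture.

```python
# Illustrative only: a toy U-Net-style generator with one skip connection,
# showing the shallow-to-deep layer structure Park et al. describe. Sizes and
# names are hypothetical, not taken from the reference.
import torch
import torch.nn as nn


class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU())   # shallow: edges/lines
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU())  # middle: shape/texture
        self.up1 = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU())
        self.up2 = nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1)  # 32 = 16 decoded + 16 skipped channels

    def forward(self, x):
        e = self.enc(x)               # low-level features
        m = self.mid(e)               # higher-level features
        d = self.up1(m)
        d = torch.cat([d, e], dim=1)  # skip connection preserves the floor plan structure
        return torch.tanh(self.up2(d))


out = TinyUNet()(torch.rand(1, 1, 64, 64))  # out.shape == (1, 3, 64, 64)
```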
With respect to claim 6, the combination of Park et al. and Aldeborgh et al. discloses the method of claim 5 above, and Aldeborgh et al. further discloses “wherein post processing comprises contourization, generating a vectorized architectural plan.” as [Aldeborgh et al. (paragraph [0013] “As shown in FIG. 1C, and by reference number 130, navigation platform 115 may process the vectorized floor plan, with a machine learning model, to remove extraneous features and to generate a processed vectorized floor plan. The machine learning model may recognize the extraneous features based on metadata included in the vectorized floor plan, based on image recognition of the extraneous features, and/or the like.”, Fig. 1C)];
With respect to claim 7, the combination of Park et al. and Aldeborgh et al. discloses the method of claim 5 above, and Park et al. further discloses “wherein post processing comprises room segmentation.” as [Park et al. (paragraph [0035] “For example, some embodiments partition a given floor plan into optimal sections and then employ a layered pipeline (also referred to as a network), to generate synthetic images of radio propagation for a given frequency (or frequency band) and radio transmitter placement using a generative network (also referred to as a generator) for each section, while ensuring accuracy of the generated image through a self-correcting feedback loop (e.g., using a so-called discriminator network or simply discriminator). The generative network and discriminator network may be used in a generative adversarial network (GAN) implementation to arrive at a desirable indoor radio transmitter distribution.”)];
With respect to claim 8, the combination of Park et al. and Aldeborgh et al. discloses the method of claim 5 above, and Park et al. further discloses “wherein the translated layers comprise at least one of walls, windows or doors.” as [Park et al. (paragraph [0036] “The training may use machine-learning (e.g., using neural networks such as a GAN). The discriminator network then uses the data presented to it to learn statistically significant phenomena related to placement of radio transmitters operating at a given frequency and the placement’s relationship to factors such as radio propagation, operating power, floor shape, wall layout, wall material, furniture density, orientation, and connectivity between rooms on the floor.”, Park et al. paragraph [0047] “In order to preserve most of the floor plan structure, one embodiment adopts the U-Net architecture as the generator. Each convolutional layer extracts features from the previous layer and passes it to the next layer. The shallow layers are responsible for extracting low-level features from a given image such as different type of lines. The middle layers are responsible for extracting mid-level features such as shape and texture. The deep layers are responsible for extracting high-level features such as object, composition of different shapes, or even more complicated signals.”, Figs. 2A and 2B)];
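As an illustration of splitting a translated plan into per-class layers such as walls, windows, and doors, the sketch below assumes a hypothetical color-coding convention (one RGB color per architectural class); the convention, function name, and placeholder image are not taken from either reference.

```python
# Illustrative only: splitting a color-coded translated plan into per-class
# layers (e.g., walls, windows, doors). The color coding is a hypothetical
# convention, not one drawn from Park et al. or Aldeborgh et al.
import numpy as np

# Hypothetical convention: each class rendered in its own RGB color.
CLASS_COLORS = {"walls": (0, 0, 0), "windows": (0, 0, 255), "doors": (255, 0, 0)}


def split_layers(translated_plan: np.ndarray) -> dict:
    """Return one binary mask (H x W) per architectural class."""
    return {
        name: np.all(translated_plan == np.array(color), axis=-1)
        for name, color in CLASS_COLORS.items()
    }


plan = np.zeros((64, 64, 3), dtype=np.uint8)  # placeholder translated plan image
layers = split_layers(plan)                   # {"walls": mask, "windows": mask, "doors": mask}
```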
With respect to claim 9, the combination of Park et al. and Aldeborgh et al. discloses the method of claim 5 above, and Aldeborgh et al. further discloses “manually correcting the translated layers.” as [Aldeborgh et al. (paragraph [0010] “In some implementations, client device 105 may include a mobile device, a computer, a telephone, and/or the like that a user may utilize to cause server device 110 to provide information to navigation platform 115. The user may also utilize client device 105 to interact with and/or receive information from navigation platform 115.”, Aldeborgh et al. paragraph [0029] “In some implementations, navigation platform 115 may receive information indicating a modification to the architectural floor plan of the interior of the building, and modify at least one of the identified paths based on the modification to the architectural floor plan. In some implementations, navigation platform 115 may retrain one or more of the vectorization model, the machine learning model, the convex hull model, or the pathfinding model based on the identified paths.”, The examiner considers the user's modification of the architectural floor plan to be correcting the translated layers, since the translated layers include the vectorization of the floor plan and the navigation platform can retrain the vectorization model)];
With respect to claim 10, Park et al. discloses the method of claim 1 above.
While Park et al. teaches inputting an architectural plan into a trained machine learning model and receiving a translated architectural plan as an output from the trained machine learning model, Park et al. does not explicitly disclose “performing contourization on the translated architectural plan, generating a vectorized architectural plan.”
Aldeborgh et al. discloses “performing contourization on the translated architectural plan” as [Aldeborgh et al. (paragraph [0023] “In some implementations, when generating the visibility graph, navigation platform 115 may connect pairs of points between which lines can be drawn, without touching the simplified convex hull polygons and without going over an outer edge of the exterior of the building, to generate the visibility graph. In this way, navigation platform 115 may create a visibility graph that represents potential lines of sight within the interior of the building, as further shown in FIG. 1F.”)];
“generating a vectorized architectural plan.” as [Aldeborgh et al. (paragraph [0012] “As shown in FIG. 1B, and by reference number 125, navigation platform 115 may process the architectural floor plan to generate a vectorized floor plan of polygons. In some implementations, navigation platform 115 may utilize a vectorization model that converts the architectural floor plan (e.g., from an image format) into a vector representation of the architectural floor plan. For example, the vectorization model may convert a two-dimensional image (e.g., the architectural floor plan) into a two-dimensional vector representation of the image. The vector representation may be provided in a vector file format (e.g., a scalable vector graphics (SVG) format, an encapsulated postscript (EPS) format, and/or the like) and may represent the architectural floor plan as a set of vectors or shapes (e.g., polygons), as shown in FIG. 1B. The vector representation may correspond to the vectorized floor plan of polygons.”)];
Park et al. and Aldeborgh et al. are analogous art because they are from the same field of endeavor of analyzing a floor plan of a building.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of Park et al. of inputting an architectural plan into a trained machine learning model and receiving a translated architectural plan as an output from the trained machine learning model by incorporating performing contourization on the translated architectural plan, generating a vectorized architectural plan as taught by Aldeborgh et al. for the purpose of generating paths on the interior of a building.
Park et al. in view of Aldeborgh et al. teaches performing contourization on the translated architectural plan, generating a vectorized architectural plan.
The motivation for doing so would have been that Aldeborgh et al. teaches that generating paths on the interior of a building allows the exact location of a client device associated with a user to be determined, which helps the user navigate a floor plan of the building (Aldeborgh et al., paragraphs [0006]-[0007]).
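As context for “performing contourization on the translated architectural plan, generating a vectorized architectural plan,” the sketch below shows one plausible way a binary layer of a translated plan could be traced into simplified polygons (vector geometry) using OpenCV. The function, simplification tolerance, and example mask are hypothetical and are not taken from Aldeborgh et al.

```python
# Illustrative only: contourization/vectorization of a binary layer of a
# translated plan into polygon (vector) geometry. Names and tolerances are
# hypothetical, not drawn from Aldeborgh et al.
import cv2
import numpy as np


def contourize(mask: np.ndarray, epsilon_px: float = 2.0) -> list:
    """Trace contours in a binary layer and simplify each one to a polygon."""
    mask_u8 = mask.astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask_u8, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Each contour becomes a list of (x, y) vertices, i.e., a vector polygon.
    return [cv2.approxPolyDP(c, epsilon_px, True).reshape(-1, 2) for c in contours]


wall_mask = np.zeros((64, 64), dtype=bool)
wall_mask[10:20, 10:50] = True
polygons = contourize(wall_mask)  # vectorized geometry for the "walls" layer
```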
With respect to claim 11, the combination of Park et al. and Aldeborgh et al. discloses the method of claim 10 above, and Aldeborgh et al. further discloses “splitting the vectorized architectural plan into vectorized layers.” as [Aldeborgh et al. (paragraph [0016] “In this case, navigation platform 115 may perform binary recursive partitioning to split the historical data into partitions and/or branches and use the partitions and/or branches to determine outcomes (e.g., that a feature of a vectorized floor plan is an extraneous feature).”)];
With respect to claim 12, the combination of Park et al. and Aldeborgh et al. discloses the method of claim 11 above, and Aldeborgh et al. further discloses “wherein the vectorized layers comprise at least one of walls, windows or doors.” as [Aldeborgh et al. (paragraph [0011] “For example, the architectural floor plan may include a file (e.g., a bitmap image file, a raster image file, a computer-aided design (CAD) drawing, and/or the like) that represents an interior of the building (e.g., an image of an interior floor plan showing an overhead view of rooms, walls, dividers, spaces, objects, obstacles, and/or other physical features at one level of the building, such as an office, a floor of the building, a level of a public venue, and/or the like).”)];
With respect to claims 17-20, the claims recite the same substantive limitations as claims 5-6 and 8-9 above, and are rejected using the same teachings.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bergin et al. (U.S. PGPub 2020/0151923) is relevant because it discloses a method and system for parametrizing a sketch, in which an acquired sketch includes raster lines that define a raster-image-based floor plan sketch, and vectorized geometry is generated from the sketch dynamically in real time based on the raster lines.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BERNARD E COTHRAN whose telephone number is (571) 270-5594. The examiner can normally be reached 9:00 AM - 5:30 PM EST, Monday through Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ryan F Pitaro, can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BERNARD E COTHRAN/Examiner, Art Unit 2188
/RYAN F PITARO/Supervisory Patent Examiner, Art Unit 2188