Prosecution Insights
Last updated: April 19, 2026
Application No. 18/405,696

ELECTRONIC DEVICE FOR GENERATING A FLOOR MAP IMAGE FROM A HANDWRITTEN IMAGE AND METHOD FOR CONTROLLING THE SAME

Final Rejection (§103)
Filed: Jan 05, 2024
Examiner: WANG, JIN CHENG
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)

Grant Probability: 59% (Moderate)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 7m
Grant Probability With Interview: 69%

Examiner Intelligence

Career Allow Rate: 59% (grants 59% of resolved cases; 492 granted / 832 resolved; -2.9% vs TC avg)
Interview Lift: +10.3% (moderate, roughly +10% lift in resolved cases with an interview vs. without)
Typical Timeline: 3y 7m average prosecution; 40 applications currently pending
Career History: 872 total applications across all art units

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 62.7% (+22.7% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)

Tech Center averages are estimates; based on career data from 832 resolved cases.
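For readers who want to reproduce the headline figures, the allow-rate arithmetic is a few lines of Python. Note the Tech Center average is inferred from the stated -2.9% delta, since the report gives only the difference, not the average itself:

```python
# Cross-check of the examiner statistics reported above.
granted, resolved = 492, 832

allow_rate = granted / resolved * 100   # career allow rate, in percent
tc_delta = -2.9                          # reported delta vs Tech Center average
tc_average = allow_rate - tc_delta       # implied TC 2600 average

print(f"Career allow rate: {allow_rate:.1f}%")   # 59.1%, displayed as 59%
print(f"Implied TC average: {tc_average:.1f}%")  # about 62.0%
```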

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 12/03/2025 have been fully considered but they are not persuasive. In Remarks, applicant argued the claim limitations set forth in the originally filed claims 3 and 4 (which are now cancelled). Applicant did not address Sfar's disclosures in relation to the claimed invention; Sfar teaches the disputed claim limitation. Applicant's arguments are therefore inapposite to an obviousness rejection based on the Sfar reference. Moreover, the cited references, taken together, teach the claim limitation. For example, Sfar teaches the claim limitation that the at least one processor is further configured to execute the at least one instruction to: based on an intersection being present between a first straight line and a second straight line, among the plurality of straight lines, identify a first corner with respect to the intersection of the first straight line and the second straight line (Sfar teaches detecting the first corner between two line segments based on the distance between the two line segments being below a predefined threshold. Sfar teaches at Paragraph [0166] 4. Line joining; This step is only applied on the wall-specific mask. The processed mask returned by step 3 comprises a set of line segments corresponding to straight walls. This step consists in detecting where there is a junction between two straight walls and modifying the corresponding line segments by joining their extremity. [0167] a. 
For every pair of line segments, if they are not collinear and the distance between the two segments is below a predefined threshold, the two segments are modified such that one of their endpoints overlaps with the point corresponding to the intersection between the two lines containing the two segments.), and based on an intersection being not present between a third straight line and a fourth straight line, among the plurality of straight lines, determine whether the third straight line and the fourth straight line form the first corner based on a distance between the third straight line and the fourth straight line and an angle formed by the third straight line and the fourth straight line (Sfar teaches at Paragraphs 0156-0167 that if the two line segments are not collinear, the angle formed by the two line segments is larger than a threshold, and that whether the two line segments form a junction is determined based on the distance being below a predefined threshold and the angle being larger than a threshold (because of the non-collinearity). Sfar thus teaches, based on the intersection between the two line segments being not present, determining whether the two line segments form a corner (collinear pairs form no corner; non-collinear pairs within the distance threshold do) based on the distance between the two line segments being below a predefined threshold and the angle formed by the two line segments, by detecting non-collinearity where there is a junction between two straight walls and modifying the corresponding line segments by joining their extremity.). Sfar teaches at Paragraph [0167] For every pair of line segments, if they are not collinear and the distance between the two segments is below a predefined threshold, the two segments are modified such that one of their endpoints overlaps with the point corresponding to the intersection between the two lines containing the two segments. 
Sfar teaches at Paragraph 0156 that the first and/or second predetermined collinearity threshold(s) may be defined as a threshold on the (non-oriented) angle between two line segments. Said threshold may be defined as a function of the distribution of all angles formed by two contiguous walls in the training dataset. The collinearity threshold may be defined thanks to this distribution. For example, the value of the angle such that less than 5% of the angles formed by two contiguous walls are lower than this value. If said value is high (e.g. higher than 45°), it can be lowered to 30°. In practice, a value of the order of 30° provides good results. Sfar teaches at Paragraph [0166] 4. Line joining; This step is only applied on the wall-specific mask. The processed mask returned by step 3 comprises a set of line segments corresponding to straight walls. This step consists in detecting where there is a junction between two straight walls and modifying the corresponding line segments by joining their extremity. The developed algorithm is as follows: [0167] a. For every pair of line segments, if they are not collinear and the distance between the two segments is below a predefined threshold, the two segments are modified such that one of their endpoints overlaps with the point corresponding to the intersection between the two lines containing the two segments. [0168] b. While segment pairs have been modified in the previous a. step, return to a. step. Otherwise, return the final set of line segments. Sfar teaches at Paragraph [0170] The next step may consist in constructing 3D primitives required by the 3D reconstruction API such as wall primitives, door primitives and window primitives. For instance, the wall primitive may be defined by the following attributes: coordinates of the two endpoints, thickness, height, references of the adjacent walls. Thanks to the refined mask, wall, window and door primitives may be easily built. 
Indeed, information such as coordinates of endpoints, reference of adjacent walls, reference of the wall to which a window (resp. door) belongs can be easily extracted from the refined mask. Other information such as wall/window/door height or width may be predefined or provided by a user). In Remarks, applicant individually attacked Heras, arguing that the distance between two walls is not used to determine whether those two walls form a corner. Heras, however, teaches at Sections 4.1.1-4.1.3 determining whether wall segments belong to the same wall or form different kinds of junctions. Heras teaches at Section 4.1.1 that small unaligned segments form straight walls after closing the wall image by joining unconnected pixels. For example, Heras teaches at Section 4.1.3 that the wall segments belong to the same wall in the wall-segment graph, and at Section 4.1.2 that geometric computations among wall segments, such as distances or angles, can be performed and that the segments are connected by junctions, where the coordinate of the junction point between the two segments and the relative angle between the two segments are represented in the wall-segment graph. A connection between two wall segments at close to 90 degrees means that the distance is smaller than a threshold distance and the angle is larger than a threshold angle, forming an L-junction. Heras teaches at FIG. 5 that, based on the distance between the green line and the black line being less than a threshold distance and the angle formed being 90 degrees, it is determined that the green line and the black line form an L-junction, and the L-junction is identified by extending one of the green line and the black line. 
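The attributed-graph representation the rejection reads out of Heras Section 4.1.2 (nodes are wall segments; edges carry the junction coordinate and the relative angle between segments) can be sketched in a few lines of Python. This is only an illustration of the data structure as characterized above: the segment coordinates, the graph layout, and the 10-degree tolerance are assumptions for the example, not values taken from Heras.

```python
import math

# Minimal sketch of a wall-segment graph in the style the Office Action
# attributes to Heras Section 4.1.2: nodes are line segments, edges are
# junctions annotated with the junction point and the relative angle.
# All coordinates and tolerances here are illustrative assumptions.

def angle_between(seg_a, seg_b):
    """Non-oriented angle, in degrees, between two segments ((x1,y1),(x2,y2))."""
    def direction(seg):
        (x1, y1), (x2, y2) = seg
        return math.atan2(y2 - y1, x2 - x1)
    diff = abs(direction(seg_a) - direction(seg_b)) % math.pi
    return math.degrees(min(diff, math.pi - diff))

segments = {
    "green": ((0, 0), (10, 0)),    # horizontal wall segment
    "black": ((10, 0.5), (10, 8)), # near-vertical wall, endpoint close to green's
}

# Edge attributes: the junction point and the relative angle between segments.
graph_edges = {
    ("green", "black"): {
        "junction": (10, 0),
        "angle": angle_between(segments["green"], segments["black"]),
    }
}

# A roughly 90-degree junction between connected segments is an L-junction.
edge = graph_edges[("green", "black")]
print("L-junction" if abs(edge["angle"] - 90) < 10 else "other junction")
```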
Heras teaches at Section 4.1.3 that two wall-segments connected at a right angle (90 degrees), with a certain tolerance margin, are considered to belong to two different walls (connected wall segments means that the distance between the two wall segments is less than a distance threshold). Heras teaches at Section 4.1.2 that an attributed graph of line segments is created using the open-source graph library. In the attributed graph, the nodes are the segments obtained from the vectorization and the edges represent binary junctions among connected nodes, and the edges contain two attributes: the coordinates of the junction point between the two segments and the relative angle between them. Heras teaches at Section 4.1 that nodes are wall entities, which can be seen as groups of connected wall-segments, attributed with the geometric coordinates of their end-points, and edges are connections among walls at these end-points, and at Section 4.3 that closed regions are found from the final graph using the optimal algorithm. Heras further teaches the claim limitation that the at least one processor is further configured to execute the at least one instruction to: based on the distance between the third straight line and the fourth straight line being less than a first threshold distance and the angle formed by the third straight line and the fourth straight line being greater than a threshold angle, determine that the third straight line and the fourth straight line form the first corner, and identify the first corner of the third straight line and the fourth straight line by extending one of the third straight line and the fourth straight line (Heras teaches at Section 4.1.1 that a raw vectorization of the image leads to multiple corners and small unaligned segments for completely straight walls. 
This issue is solved by applying a morphological opening after closing the wall image, which allows deleting small noise and joining unconnected pixels, and a logical AND with the original opened image to make borders straighter. Heras teaches at Section 4.1 that the wall is usually delimited by intersections with other walls. Heras teaches at FIG. 5 identifying N- and L-junctions (corners) based on a plurality of straight lines, including the colored straight lines such as the green straight line, red straight line, blue straight line, and black straight line. Heras teaches at Section 4.1.1 detecting linear elements including walls, where a raw vectorization of the image leads to multiple corners and small unaligned segments for completely straight walls. Heras teaches at FIG. 5 that, based on the distance between the green line and the black line being less than a threshold distance and the angle formed being 90 degrees, it is determined that the green line and the black line form an L-junction, and the L-junction is identified by extending one of the green line and the black line). Applicant also misinterpreted Feltes's teaching in relation to the claimed invention. Feltes teaches determining the separation distance between two edges to determine whether the two wall edges intersect (forming a 90-degree angle) or do not intersect (forming parallel edges or being wall segments of different rooms). The determination of the intersection or no intersection is based on the separation distance being smaller than a distance threshold and the angle being larger than an angle threshold. 
For example, Feltes further teaches the claim limitation that the at least one processor is further configured to execute the at least one instruction to: based on the distance between the third straight line and the fourth straight line being less than a first threshold distance and the angle formed by the third straight line and the fourth straight line being greater than a threshold angle, determine that the third straight line and the fourth straight line form the first corner, and identify the first corner of the third straight line and the fourth straight line by extending one of the third straight line and the fourth straight line. Feltes teaches at FIG. 5 and Section 4 that the angles of the rectangle created by connecting the two edges should be approximately 90 degrees, the two edges should be aligned, the area between the two edges should be empty, and the two edges will not be connected if they are separated by another wall (meaning that the angle of the two intersecting edges which are determined to be connected is greater than an angle threshold, and the distance between the two parallel edges which are determined to be not connected is larger than a distance threshold, which implicitly means that the distance between two intersecting edges is smaller than a distance threshold). Feltes further teaches resolving errors in corner point detection, where gaps are closed by connecting previously detected edges. Feltes teaches at Section 3 that each wall edge is processed for closing the gaps in the probable locations of doors and windows and detected corner points are used for extraction of parallel edges, and at Section 4 that gaps are closed by connecting pairs of previously detected edges, the angles of the rectangle created by connecting the two edges should be 90 degrees, and the area between the two edge candidates should be empty. This ensures that the two edges will not be connected if they are separated by another wall. 
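The distance-and-angle corner test that the rejection reads onto the claim (two non-intersecting segments form a corner when their separation is below a distance threshold and their mutual angle exceeds an angle threshold, the corner being placed at the intersection of the two supporting lines) can be sketched as follows. The threshold values and segment endpoints are illustrative assumptions, not figures from Sfar, Heras, or Feltes:

```python
import math

# Sketch of the claimed corner test as characterized in the rejection.
# Thresholds and coordinates below are illustrative assumptions.

DIST_THRESHOLD = 1.0    # assumed separation threshold (arbitrary units)
ANGLE_THRESHOLD = 30.0  # degrees; cf. the ~30-degree collinearity threshold in Sfar

def line_intersection(seg_a, seg_b):
    """Intersection of the infinite lines through two segments (None if parallel)."""
    (x1, y1), (x2, y2) = seg_a
    (x3, y3), (x4, y4) = seg_b
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-12:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def endpoint_gap(seg_a, seg_b):
    """Smallest distance between an endpoint of seg_a and an endpoint of seg_b."""
    return min(math.dist(p, q) for p in seg_a for q in seg_b)

def mutual_angle(seg_a, seg_b):
    """Non-oriented angle, in degrees, between the two segments."""
    def direction(seg):
        (x1, y1), (x2, y2) = seg
        return math.atan2(y2 - y1, x2 - x1)
    diff = abs(direction(seg_a) - direction(seg_b)) % math.pi
    return math.degrees(min(diff, math.pi - diff))

def try_form_corner(seg_a, seg_b):
    """Return the corner point if the two segments qualify, else None."""
    if (endpoint_gap(seg_a, seg_b) < DIST_THRESHOLD
            and mutual_angle(seg_a, seg_b) > ANGLE_THRESHOLD):
        return line_intersection(seg_a, seg_b)
    return None

# Two walls that stop just short of meeting at a right angle:
third = ((0, 0), (9.5, 0))
fourth = ((10, 0.5), (10, 8))
print(try_form_corner(third, fourth))  # a corner point, approximately (10.0, 0.0)
```

Extending one of the two segments to the returned point realizes the "identify the first corner by extending one of the lines" step of the claim as the rejection characterizes it.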
Feltes thus teaches that the first two edges intersect at a second corner point while the second two edges do not intersect each other, as there is no angle formed by the two edges. Feltes also teaches at FIG. 5 that a third corner is detected between the third line and the fourth line, and the third line and the fourth line form a 90-degree angle. Sfar teaches the claim limitation that the at least one processor is further configured to execute the at least one instruction to: based on the distance between the third straight line and the fourth straight line being less than a first threshold distance and the angle formed by the third straight line and the fourth straight line being greater than a threshold angle, determine that the third straight line and the fourth straight line form the first corner, and identify the first corner of the third straight line and the fourth straight line by extending one of the third straight line and the fourth straight line (Sfar teaches at Paragraphs 0156-0167 that if the two line segments are not collinear, the angle formed by the two line segments is larger than a threshold, and that whether the two line segments form a junction is determined based on the distance being below a predefined threshold and the angle being larger than a threshold (because of the non-collinearity). Sfar thus teaches, based on the intersection between the two line segments being not present, determining whether the two line segments form a corner (collinear pairs form no corner; non-collinear pairs within the distance threshold do) based on the distance between the two line segments being below a predefined threshold and the angle formed by the two line segments, by detecting non-collinearity where there is a junction between two straight walls and modifying the corresponding line segments by joining their extremity.). 
Sfar teaches at Paragraph [0167] For every pair of line segments, if they are not collinear and the distance between the two segments is below a predefined threshold, the two segments are modified such that one of their endpoints overlaps with the point corresponding to the intersection between the two lines containing the two segments. Sfar teaches at Paragraph 0156 that the first and/or second predetermined collinearity threshold(s) may be defined as a threshold on the (non-oriented) angle between two line segments. Said threshold may be defined as a function of the distribution of all angles formed by two contiguous walls in the training dataset. The collinearity threshold may be defined thanks to this distribution. For example, the value of the angle such that less than 5% of the angles formed by two contiguous walls are lower than this value. If said value is high (e.g. higher than 45°), it can be lowered to 30°. In practice, a value of the order of 30° provides good results. Sfar teaches at Paragraph [0166] 4. Line joining; This step is only applied on the wall-specific mask. The processed mask returned by step 3 comprises a set of line segments corresponding to straight walls. This step consists in detecting where there is a junction between two straight walls and modifying the corresponding line segments by joining their extremity. The developed algorithm is as follows: [0167] a. For every pair of line segments, if they are not collinear and the distance between the two segments is below a predefined threshold, the two segments are modified such that one of their endpoints overlaps with the point corresponding to the intersection between the two lines containing the two segments. [0168] b. While segment pairs have been modified in the previous a. step, return to a. step. Otherwise, return the final set of line segments. 
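The two-step line-joining loop quoted from Sfar [0166]-[0168] (step a: for every pair of non-collinear segments whose gap is below a threshold, snap the nearest endpoints to the intersection of their supporting lines; step b: repeat until no pair changes) can be sketched as below. The threshold values and input walls are illustrative assumptions, not values from Sfar:

```python
import math

# Sketch of the iterative line-joining algorithm quoted from Sfar.
# Thresholds and input data here are illustrative assumptions.

DIST_THRESHOLD = 1.0     # assumed endpoint-gap threshold
COLLINEAR_ANGLE = 30.0   # degrees; Sfar suggests a value of the order of 30

def mutual_angle(seg_a, seg_b):
    """Non-oriented angle, in degrees, between two segments."""
    def direction(seg):
        (x1, y1), (x2, y2) = seg
        return math.atan2(y2 - y1, x2 - x1)
    diff = abs(direction(seg_a) - direction(seg_b)) % math.pi
    return math.degrees(min(diff, math.pi - diff))

def line_intersection(seg_a, seg_b):
    """Intersection of the infinite lines through two segments (None if parallel)."""
    (x1, y1), (x2, y2) = seg_a
    (x3, y3), (x4, y4) = seg_b
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-12:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def join_segments(segments):
    segments = [list(s) for s in segments]
    changed = True
    while changed:                    # step (b): iterate until no pair is modified
        changed = False
        for i in range(len(segments)):
            for j in range(i + 1, len(segments)):
                a, b = segments[i], segments[j]
                if mutual_angle(a, b) <= COLLINEAR_ANGLE:
                    continue          # near-parallel pairs count as collinear
                # Nearest endpoint pair and its gap:
                ia, ib = min(((p, q) for p in (0, 1) for q in (0, 1)),
                             key=lambda pq: math.dist(a[pq[0]], b[pq[1]]))
                if math.dist(a[ia], b[ib]) >= DIST_THRESHOLD:
                    continue
                cross = line_intersection(a, b)
                if cross is None or (math.dist(a[ia], cross) < 1e-9
                                     and math.dist(b[ib], cross) < 1e-9):
                    continue          # already joined at this point
                a[ia] = b[ib] = cross  # step (a): snap endpoints together
                changed = True
    return [tuple(map(tuple, s)) for s in segments]

walls = [((0, 0), (9.5, 0)), ((10, 0.5), (10, 8))]
print(join_segments(walls))  # endpoints snapped to a shared corner near (10, 0)
```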
Sfar teaches at Paragraph [0170] The next step may consist in constructing 3D primitives required by the 3D reconstruction API such as wall primitives, door primitives and window primitives. For instance, the wall primitive may be defined by the following attributes: coordinates of the two endpoints, thickness, height, references of the adjacent walls. Thanks to the refined mask, wall, window and door primitives may be easily built. Indeed, information such as coordinates of endpoints, reference of adjacent walls, reference of the wall to which a window (resp. door) belongs can be easily extracted from the refined mask. Other information such as wall/window/door height or width may be predefined or provided by a user).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 7 and 11, 12, 15 are rejected under 35 U.S.C. 103 as being unpatentable over Randolph US-PGPUB No. 2017/0091885 (hereinafter Randolph) in view of Pitzer et al. US-PGPUB No. 2014/0267717 (hereinafter Pitzer); Bergin et al. US-PGPUB No. 2020/0151923 (hereinafter Bergin ‘923); Bergin et al. US-PGPUB No. 2019/0228115 (hereinafter Bergin ‘115); R. Sfar et al. US-PGPUB No. 2019/0243928 (hereinafter Sfar); Y. Aoki, et al., “A Prototype System for Interpreting Hand-Sketched Floor Plans”, Proc. of 13th International Conf. 
on Pattern Recognition, August 25-29, 1996, Vienna, Austria, pp. 747-751 (hereinafter Aoki); Akio Shio et al., “Sketch Plan: A Prototype System for Interpreting Hand-Sketched Floor Plans”, Systems and Computers in Japan, Vol. 31, No. 6, 2000, pp. 10-18 (hereinafter Shio); L. Heras, et al., “Statistical Segmentation and Structural Recognition for Floor Plan Interpretation”, Springer, Dec. 3, 2013, pp. 221-237 (hereinafter Heras); D. Vargas, “Wall Extraction and Room Detection for Multi-Unit Architectural Floor Plans”, Master of Science Thesis, University of Cauca, Colombia, 2015, pp. 1-98 (hereinafter Vargas); Max Feltes, et al., “Improved Contour-Based Corner Detection for Architectural Floor Plans”, 10th International Workshop, GREC 2013, Bethlehem, PA, USA, August 20-21, 2013, pp. 191-203 (hereinafter Feltes).

Re Claim 1: Randolph/Bergin ‘923 teaches an electronic device comprising: a memory storing at least one instruction; and at least one processor configured to execute the at least one instruction to (Randolph teaches at FIG. 6 and Paragraph 0089-0090 that the computing system 600 includes a processor 605 configured to execute instructions stored in RAM): change a plurality of non-straight lines in a handwritten image to a plurality of straight lines (Randolph teaches at Paragraph 0082-0084 that the system 100 can straighten the lines. Pitzer teaches at FIGS. 4-6 changing a plurality of non-straight lines in FIG. 4 to straight lines in FIG. 6. Sfar teaches at Paragraph [0166] 4. Line joining; This step is only applied on the wall-specific mask. The processed mask returned by step 3 comprises a set of line segments corresponding to straight walls. This step consists in detecting where there is a junction between two straight walls and modifying the corresponding line segments by joining their extremity. The developed algorithm is as follows: [0167] a. 
For every pair of line segments, if they are not collinear and the distance between the two segments is below a predefined threshold, the two segments are modified such that one of their endpoints overlaps with the point corresponding to the intersection between the two lines containing the two segments. [0168] b. While segment pairs have been modified in the previous a. step, return to a. step. Otherwise, return the final set of line segments. Sfar teaches at Paragraph [0170] The next step may consist in constructing 3D primitives required by the 3D reconstruction API such as wall primitives, door primitives and window primitives. For instance, the wall primitive may be defined by the following attributes: coordinates of the two endpoints, thickness, height, references of the adjacent walls. Thanks to the refined mask, wall, window and door primitives may be easily built. Indeed, information such as coordinates of endpoints, reference of adjacent walls, reference of the wall to which a window (resp. door) belongs can be easily extracted from the refined mask. Other information such as wall/window/door height or width may be predefined or provided by a user. Bergin ‘115 teaches that the user’s drawn lines are not straight in the sketch 902 while the lines generated in the plan 906 are straight lines. Bergin ‘115 teaches at Paragraph [0077] FIG. 9 illustrates results from a real-time interactive generative design and optimization process in accordance with one or more embodiments of the invention. In particular, the user interface output via steps 306-308 of FIG. 3 are illustrated in FIG. 9. The user input includes a user drawing a sketch 902 and/or a bubble diagram 904. As the user input is received, in a separate layout viewport, a layout generated floorplan 906 is generated, updated, and displayed in real-time. 
Such a generated floor plan 906 is generated and displayed based on the user input (i.e., sketch 902/bubble diagram 904) and the knowledge base that has been created and maintained. For example, as the user is drawing a sketch 902 that includes a line representing a wall, embodiments of the invention may automatically generate a bubble diagram 904 and a floorplan 906 that may include completed walls and details such as furniture. In other words, as a user draws a sketch 902, there is a real-time auto-completion of a bubble diagram 904/floorplan 906 (e.g., in a separate window). The knowledge base enables such an auto-completion based on a recognition that thin user drawn lines may represent an interior space and double-lines may represent windows. Bergin ‘923 teaches at FIGS. 2-3 and Paragraph 0034 that [0034] Transformation is assigned to the nodes of the parametric graphs, wherein each node represents corner coordinates of the target wall elements. This transformation modifies all lines connected to the transformed node. However, to avoid distortion of the orthogonal nature of the plans, colinear paths (which connect to each other with a mutual node and share the same direction vector) are merged. Next, the array of nodes that are located on the collinear lines are identified. After applying transformations to the connected line node array, new polylines are constructed from each node array. This would result in a fully automated parametric model that takes transformation vectors and connected line indices as an input and outputs a new floorplan layout without producing undesired gaps and floorplan voids). Randolph/Pitzer/Bergin ‘923/Bergin ‘115 at least suggests the claim limitation: identify, based on the plurality of straight lines, a first corner in the handwritten image (Randolph teaches at Paragraph 0082-0084 that the system 100 can find quadrilaterals (which inherently include corners/vertices) and shapes formed by connecting straight lines. 
Pitzer teaches at Paragraph 0039 that the mobile electronic device 104 identifies corresponding corners and structural features of the multiple floor plans and arranges the floor plans for the rooms automatically using, for example, a registration process that fits floor plans of different rooms together based on common wall edges between adjacent rooms. FIG. 10A depicts display 1000 including multiple room floor plans in a larger building, and FIG. 10B depicts another display 1050 including three-dimensional models of multiple rooms in a building. Pitzer teaches at FIG. 4 and Paragraph 0028 that the sketched floor plan includes an intersection between two strokes corresponding to a corner between two or more walls. Pitzer teaches at FIG. 6 and Paragraph 0034 a precise floor plan 604 corresponding to the approximate floor plan 404 and the walls in the floor plan intersect each other at perpendicular angles and measured angles between walls that do not meet at perpendicular angles. Sfar teaches at Paragraph [0166] 4. Line joining; This step is only applied on the wall-specific mask. The processed mask returned by step 3 comprises a set of line segments corresponding to straight walls. This step consists in detecting where there is a junction between two straight walls and modifying the corresponding line segments by joining their extremity. The developed algorithm is as follows: [0167] a. For every pair of line segments, if they are not collinear and the distance between the two segments is below a predefined threshold, the two segments are modified such that one of their endpoints overlaps with the point corresponding to the intersection between the two lines containing the two segments. [0168] b. While segment pairs have been modified in the previous a. step, return to a. step. Otherwise, return the final set of line segments. 
Sfar teaches at Paragraph [0170] The next step may consist in constructing 3D primitives required by the 3D reconstruction API such as wall primitives, door primitives and window primitives. For instance, the wall primitive may be defined by the following attributes: coordinates of the two endpoints, thickness, height, references of the adjacent walls. Thanks to the refined mask, wall, window and door primitives may be easily built. Indeed, information such as coordinates of endpoints, reference of adjacent walls, reference of the wall to which a window (resp. door) belongs can be easily extracted from the refined mask. Other information such as wall/window/door height or width may be predefined or provided by a user. Bergin ‘115 teaches at Paragraph 0077 that such a generated floor plan 906 is generated and displayed based on the user input (i.e., sketch 902/bubble diagram 904) and the knowledge base that has been created and maintained. For example, as the user is drawing a sketch 902 that includes a line representing a wall, embodiments of the invention may automatically generate a bubble diagram 904 and a floorplan 906 that may include completed walls and details such as furniture. In other words, as a user draws a sketch 902, there is a real-time auto-completion of a bubble diagram 904/floorplan 906 (e.g., in a separate window). The knowledge base enables such an auto-completion based on a recognition that thin user drawn lines may represent an interior space and double-lines may represent windows. Bergin ‘115 teaches at Paragraph 0078 that FIG. 10 illustrates an implementation example of a real-time generation of a completed plan/layout 1006 from a bubble diagram 1004 in accordance with one or more embodiments of the invention. The left side 1004 illustrates a bubble diagram showing links between different bubbles representing different rooms (e.g., garage, dining, kitchen, living, bedroom, etc.). Bergin ‘923 teaches at FIGS. 
2-3 and Paragraph 0034 that [0034] Transformation is assigned to the nodes of the parametric graphs, wherein each node represents corner coordinates of the target wall elements. This transformation modifies all lines connected to the transformed node. However, to avoid distortion of the orthogonal nature of the plans, colinear paths (which connect to each other with a mutual node and share the same direction vector) are merged. Next, the array of nodes that are located on the collinear lines are identified. After applying transformations to the connected line node array, new polylines are constructed from each node array. This would result in a fully automated parametric model that takes transformation vectors and connected line indices as an input and outputs a new floorplan layout without producing undesired gaps and floorplan voids and at Paragraph 0045 that the system identifies all the corresponding intersection nodes included in the target wall 308 axes), identify, based on the first corner, a plurality of spaces in the handwritten image (Randolph teaches at Paragraph 0082-0086 that the system 100 can find quadrilaterals (which inherently include corners/vertices) and shapes/spaces formed by connecting straight lines. This can result in a vectorized floor plan and the inspection application 112 can then add labels, such as room labels and dimensions to the vectorized floor plan. Pitzer teaches at Paragraph 0039 that the mobile electronic device 104 identifies corresponding corners and structural features of the multiple floor plans and arranges the floor plans for the rooms automatically using, for example, a registration process that fits floor plans of different rooms together based on common wall edges between adjacent rooms. FIG. 10A depicts display 1000 including multiple room floor plans in a larger building, and FIG. 10B depicts another display 1050 including three-dimensional models of multiple rooms in a building. Pitzer teaches at FIG. 
4 and Paragraph 0028 that the sketched floor plan includes an intersection between two strokes corresponding to a corner between two or more walls. Pitzer teaches at FIG. 6 and Paragraph 0034 a precise floor plan 604 corresponding to the approximate floor plan 404 and the walls in the floor plan intersect each other at perpendicular angles and measured angles between walls that do not meet at perpendicular angles. Pitzer teaches at Paragraph 0004 that different rooms in a building are separated by walls and at FIGS. 10A-10B and Paragraph 0039 that the process 200 optionally continues for multiple rooms in a building. Bergin ‘923 shows at FIG. 2 the plurality of spaces formed by the walls with respect to 204A-204C of FIG. 2 based on the sketches 202A-202C by connecting the walls and corners of the walls to form room space models 204A-204C and at Paragraph 0044 that the walls 304A-304C are identified to correspond to the user drawn 302A-302C. Bergin ‘923 teaches at FIGS. 2-3 and Paragraph 0034 that [0034] Transformation is assigned to the nodes of the parametric graphs, wherein each node represents corner coordinates of the target wall elements. This transformation modifies all lines connected to the transformed node. However, to avoid distortion of the orthogonal nature of the plans, colinear paths (which connect to each other with a mutual node and share the same direction vector) are merged. Next, the array of nodes that are located on the collinear lines are identified. After applying transformations to the connected line node array, new polylines are constructed from each node array. 
This would result in a fully automated parametric model that takes transformation vectors and connected line indices as an input and outputs a new floorplan layout without producing undesired gaps and floorplan voids), and obtain a floor map image including the plurality of spaces (Randolph teaches at Paragraph 0082-0086 that the system 100 can find quadrilaterals (which inherently include corners/vertices) and shapes/spaces formed by connecting straight lines. This can result in a vectorized floor plan and the inspection application 112 can then add labels, such as room labels and dimensions to the vectorized floor plan. Pitzer teaches at Paragraph 0039 that the mobile electronic device 104 identifies corresponding corners and structural features of the multiple floor plans and arranges the floor plans for the rooms automatically using, for example, a registration process that fits floor plans of different rooms together based on common wall edges between adjacent rooms. FIG. 10A depicts display 1000 including multiple room floor plans in a larger building, and FIG. 10B depicts another display 1050 including three-dimensional models of multiple rooms in a building. Pitzer teaches at FIG. 4 and Paragraph 0028 that the sketched floor plan includes an intersection between two strokes corresponding to a corner between two or more walls. Pitzer teaches at FIG. 6 and Paragraph 0034 a precise floor plan 604 corresponding to the approximate floor plan 404 and the walls in the floor plan intersect each other at perpendicular angles and measured angles between walls that do not meet at perpendicular angles. Pitzer teaches at Paragraph 0004 that different rooms in a building are separated by walls and at FIGS. 10A-10B and Paragraph 0039 that the process 200 optionally continues for multiple rooms in a building. Sfar teaches at Paragraph [0166] 4. Line joining; This step is only applied on the wall-specific mask. 
The processed mask returned by step 3 comprises a set of line segments corresponding to straight walls. This step consists in detecting where there is a junction between two straight walls and modifying the corresponding line segments by joining their extremity. The developed algorithm is as follows: [0167] a. For every pair of line segments, if they are not collinear and the distance between the two segments is below a predefined threshold, the two segments are modified such that one of their endpoints overlaps with the point corresponding to the intersection between the two lines containing the two segments. [0168] b. While segment pairs have been modified in the previous a. step, return to a. step. Otherwise, return the final set of line segments. Sfar teaches at Paragraph [0170] The next step may consist in constructing 3D primitives required by the 3D reconstruction API such as wall primitives, door primitives and window primitives. For instance, the wall primitive may be defined by the following attributes: coordinates of the two endpoints, thickness, height, references of the adjacent walls. Thanks to the refined mask, wall, window and door primitives may be easily built. Indeed, information such as coordinates of endpoints, reference of adjacent walls, reference of the wall to which a window (resp. door) belongs can be easily extracted from the refined mask. Other information such as wall/window/door height or width may be predefined or provided by a user. Bergin ‘115 teaches at Paragraph 0077 that such a generated floor plan 906 is generated and displayed based on the user input (i.e., sketch 902/bubble diagram 904) and the knowledge base that has been created and maintained. For example, as the user is drawing a sketch 902 that includes a line representing a wall, embodiments of the invention may automatically generate a bubble diagram 904 and a floorplan 906 that may include completed walls and details such as furniture. 
In other words, as a user draws a sketch 902, there is a real-time auto-completion of a bubble diagram 904/floorplan 906 (e.g., in a separate window). The knowledge base enables such an auto-completion based on a recognition that thin user drawn lines may represent an interior space and double-lines may represent windows. Bergin ‘115 teaches at Paragraph 0078 that FIG. 10 illustrates an implementation example of a real-time generation of a completed plan/layout 1006 from a bubble diagram 1004 in accordance with one or more embodiments of the invention. The left side 1004 illustrates a bubble diagram showing links between different bubbles representing different rooms (e.g., garage, dining, kitchen, living, bedroom, etc.). Bergin ‘923 shows at FIG. 2 the plurality of spaces formed by the walls with respect to 204A-204C of FIG. 2 based on the sketches 202A-202C by connecting the walls and corners of the walls to form room space models 204A-204C and at Paragraph 0044 that the walls 304A-304C are identified to correspond to the user drawn 302A-302C. Bergin ‘923 teaches at FIGS. 2-3 and Paragraph 0034 that [0034] Transformation is assigned to the nodes of the parametric graphs, wherein each node represents corner coordinates of the target wall elements. This transformation modifies all lines connected to the transformed node. However, to avoid distortion of the orthogonal nature of the plans, colinear paths (which connect to each other with a mutual node and share the same direction vector) are merged. Next, the array of nodes that are located on the collinear lines are identified. After applying transformations to the connected line node array, new polylines are constructed from each node array. This would result in a fully automated parametric model that takes transformation vectors and connected line indices as an input and outputs a new floorplan layout without producing undesired gaps and floorplan voids). 
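Sfar's line-joining step quoted in this mapping (for every non-collinear pair of wall segments whose gap is below a predefined threshold, snap the nearby endpoints to the intersection of their supporting lines, and repeat until no pair changes) can be sketched as follows. This is a minimal illustration under assumed Cartesian endpoint coordinates, not Sfar's implementation; the closest-endpoint pairing heuristic and the numeric guards are assumptions.

```python
import math

def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through (p1, p2) and (p3, p4)."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(den) < 1e-9:
        return None  # parallel or collinear: no unique intersection
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / den,
            (a * (y3 - y4) - (y1 - y2) * b) / den)

def join_walls(segments, threshold):
    """Sfar-style step 4: repeatedly snap near-miss wall-segment endpoints
    to the intersection of their supporting lines (steps a and b)."""
    segs = [[list(p) for p in s] for s in segments]
    changed = True
    while changed:  # step b: repeat while any pair was modified
        changed = False
        for i in range(len(segs)):
            for j in range(i + 1, len(segs)):
                inter = line_intersection(*segs[i], *segs[j])
                if inter is None:  # collinear pair: step a does not apply
                    continue
                # pick the pair of closest endpoints of the two segments
                ei = min(range(2), key=lambda k: min(
                    math.dist(segs[i][k], segs[j][m]) for m in range(2)))
                ej = min(range(2), key=lambda m: math.dist(segs[i][ei], segs[j][m]))
                gap = math.dist(segs[i][ei], segs[j][ej])
                if 1e-9 < gap < threshold:
                    segs[i][ei] = list(inter)  # join extremities at the corner
                    segs[j][ej] = list(inter)
                    changed = True
    return [tuple(tuple(p) for p in s) for s in segs]
```

For two perpendicular walls that stop just short of each other, both nearby endpoints are moved onto the common corner, e.g. a horizontal wall ending at (0.9, 1) and a vertical wall starting at (1, 1.1) are joined at (1, 1).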
It would have been obvious to one of ordinary skill in the art before the filing date of the instant application to have incorporated the floorplan generation of Pitzer/Sfar/Bergin ‘115/Bergin ‘923 into Randolph’s floorplan generation to have provided a CAD-based floorplan generated from the sketch drawing. One of ordinary skill in the art would have had the machine draw the straight lines based on the sketched lines and connect the walls represented by the straight lines into a floorplan with multiple room spaces. Aoki teaches the claim limitation that: change a plurality of non-straight lines in a handwritten image to a plurality of straight lines (Aoki teaches at Section 3.1 that the line segment extraction process determines the location and thickness of line segments from the image data and the line code’s locations are approximated on the grid lines and at the crossing position, the line attributes are determined from the codes of the four directions around the crossing position…our system uses a smoothing process for same-code sequences that are short. Aoki teaches at Section 3.2 the contour of the background region can be traced in a unique route which is not influenced by crossings or junctions and at Section 3.3 that the floor-plan elements are identified based on the line segment and closed region information and at Section 4 that the system correctly identified 6037 of 6284 line elements). Aoki at least suggests identify, based on the plurality of straight lines, a first corner in the handwritten image (Aoki teaches at Section 3.1 that the line code’s locations are approximated on the grid lines. At the crossing position, the line attributes are determined from the codes of the four directions around the crossing position, since the crossing line acts as noise. In this way, most noises near line-crossings and junctions can be removed. 
Aoki teaches at Section 3.3 to identify LEs, line segments are mainly used and the LEs are identified from the line segment’s attributes. Aoki teaches at Section 3.3 that the base shape of a closet is a triangle and that of a tatami region is a rectangle. The rectangle shape inherently includes corners and at Section 4 that the system correctly identified closed region elements), identify, based on the first corner, a plurality of spaces in the handwritten image (Aoki teaches at Section 3.2 that REs are represented by the shapes and the location of one or more closed regions….line segments have crossings and junctions. Aoki teaches at Section 3.3 to identify LEs, line segments are mainly used and the LEs are identified from the line segment’s attributes. Aoki teaches at Section 3.3 that the base shape of a closet is a triangle and that of a tatami region is a rectangle. The rectangle shape inherently includes corners and at Section 4 that the system correctly identified 2282 of 2442 closed region elements), and obtain a floor map image including the plurality of spaces (Aoki teaches at Section 3.3 that the REs are identified from their base shapes and at Figure 9 obtaining a converted image of the floor plan including the plurality of spaces/shapes such as the rectangular Tatami shape and closet shape). Shio teaches the claim limitation: identify, based on the plurality of straight lines, a first corner in the handwritten image (Shio teaches at Section 3.4 that the line elements are corrected and walls are defined using line attributes obtained in line feature extraction. Triangles making a folding door are identified as shown in FIG. 9 and a triangle inherently has a corner. 
Shio teaches at Section 3.2.2 that crossing attributes are set so that line segments are not discontinuous….crossing attributes vary depending on the presence and width of such segments and at Section 3.3 that a closet is composed of four triangles with one common vertex (corner) in the middle while a Japanese room is a certain combination of rectangles. Shio teaches at Section 3.3 that contours of closed areas confined by line segments are found through tracing white pixels adjoining the borders from inside and the inside is filled. Shio teaches at FIG. 9 obtaining a floor plan map with closet and Japanese room which include corners/vertices). identify, based on the first corner, a plurality of spaces in the handwritten image ( Shio teaches at FIG. 9 that a triangle of the folding door includes a corner and is identified. Shio teaches at Section 3.3 that contours of closed areas confined by line segments are found through tracing white pixels adjoining the borders from inside and the inside is filled. Shio teaches at FIG. 9 obtaining a floor plan map with closet and Japanese room which include corners/vertices), and obtain a floor map image including the plurality of spaces (Shio teaches at FIG. 9 obtaining a floor plan map with closet and Japanese room which include corners/vertices). It would have been obvious to one of ordinary skill in the art before the filing date of the instant application to have incorporated Shio’s identification of vertices within the architectural elements for the different shapes/elements including swinging elements into Randolph/Aoki’s generation of a floorplan image by converting the hand-sketched floor plan (Shio FIG. 1) as the floor plan input to a CAD software package to obtain a corrected floor plan drawing output (FIG. 
9) based on the shape recognition of the shapes in the input floor plan by matching the hand-sketched elements with the prerecorded basic patterns to have extracted squares, triangles, circles and other primitive shapes that make up architectural elements. One of ordinary skill in the art would have extracted lines and areas/spaces of the floor plan. Heras teaches the claim limitation: identify, based on the plurality of straight lines, a first corner in the handwritten image ( Heras teaches at Section 4.1.1 that a raw vectorization of the image leads to encounter multiple corners and small unaligned segments for completely straight walls. This issue is solved by applying a morphological opening after closing the wall image, which allows to delete small noise and join unconnected pixels, and a logical AND with the original opened image to make borders straighter. Heras teaches at Section 4.1 that the wall is usually delimited by intersections with other walls. Heras teaches at FIG. 5 identifying N and L junctions (corners) based on a plurality of straight lines including the colored straight lines such as the green straight line, red straight line, blue straight line, black straight line. Heras teaches at Section 4.1.1 detecting linear elements including walls, a raw vectorization of the image leads to encounter multiple corners and small unaligned segments for completely straight walls. Heras teaches at Section 4.1.3 N-junctions for N > 2: The intersection of three or more different wall-segments at a certain point can be considered as the intersection of N different walls. Heras teaches at Section 4.1.2 that the attributes of the nodes are the thickness of the line segment and the geometrical coordinates of the end-points of the segment). 
identify, based on the first corner, a plurality of spaces in the handwritten image ( Heras teaches at Section 4 that rooms are detected by finding cycles in a plane graph of entities (including the end-points of the wall segments). Heras teaches at Section 4.1.2 that an attributed graph of line segments is created using the open-source graph library. In the attributed graph, the nodes are the segments obtained from the vectorization and the edges represent binary junctions among connected nodes and the edges contain two attributes: the coordinates of the junction point between the two segments and the relative angle between them. Heras teaches at Section 4.1 that nodes are wall entities which can be seen as groups of connected wall-segments, attributed with the geometric coordinates of their end-points and edges are connections among walls at these end-points and at Section 4.3 that closed regions are found from the final graph using the optimal algorithm. Heras teaches at Section 3.2 that in order to be able to extract the rooms of a floor plan, we focus on the detection of the basic structural elements including walls, door and windows. Heras shows at FIG. 5 that the N-junctions, L-junctions and the wall segments form a plurality of room spaces and at Section 5 that an example illustrating the rooms detected in a BlackSet image is shown in FIG. 10b and each one of the isolated regions corresponds to a detected room. Heras teaches at Section 4.1 that the wall is usually delimited by intersections with other walls. Heras teaches at FIG. 5 identifying N and L junctions (corners) based on a plurality of straight lines including the colored straight lines such as the green straight line, red straight line, blue straight line, black straight line. 
Heras teaches at Section 4.1.1 detecting linear elements including walls, a raw vectorization of the image leads to encounter multiple corners and small unaligned segments for completely straight walls), and obtain a floor map image including the plurality of spaces ( Heras teaches at FIG. 9 obtaining a floor map based on the rooms detected in the Blackset. Heras teaches at Section 3.2 that in order to be able to extract the rooms of a floor plan, we focus on the detection of the basic structural elements including walls, door and windows. Heras shows at FIG. 5 that the N-junctions, L-junctions and the wall segments form a plurality of room spaces and at Section 5 that an example illustrating the rooms detected in a BlackSet image is shown in FIG. 10b and each one of the isolated regions corresponds to a detected room. Heras teaches at Section 2 that the authors extract the structural polygons of each floor stage). It would have been obvious to one of ordinary skill in the art before the filing date of the instant application to have incorporated Heras’s identification of junctions within the architectural elements for the different shapes/elements including wall elements into Randolph/Aoki’s generation of a floorplan image by identifying the walls and wall-end-points (corners). One of ordinary skill in the art would have extracted wall segments and rooms of the floor plan based on a graph with walls representing the nodes and the edges representing the coordinates of the junction point between the two segments and the relative angle between them and thereby the floor plan is created based on the graph structure having the nodes and edges. Vargas teaches the claim limitation: identify, based on the plurality of straight lines, a first corner in the handwritten image ( Vargas teaches at Section 2.2 that wall extraction methods are concerned with separating the walls (as thick lines) from the rest of the content in an architectural floor plan. 
Vargas teaches at FIG. 2.5 and Section 2.1.4 that a CNN method is used to detect the junctions in a floor plan and then used the junctions for reconstructing the original wall structure based on semantic rules….we have to deal with a notable degree of content overlapping which also produces corners and cross-points and FIG. 4.9 that a virtual wall is created inside a T wall intersection). identify, based on the first corner, a plurality of spaces in the handwritten image ( Vargas teaches at FIG. 2.5 and Section 2.1.4 that a CNN method is used to detect the junctions in a floor plan and then used the junctions for reconstructing the original wall structure based on semantic rules….we have to deal with a notable degree of content overlapping which also produces corners and cross-points and FIG. 4.9 that a virtual wall is created inside a T wall intersection. Vargas teaches at FIG. 2.5 connecting junctions into a set of primitives. Vargas teaches at Section 2.2 that room detection methods are concerned with detecting and segmenting the different room regions in the floor plan image and rooms are detected using geometrical methods on the vectorized wall structure or by detecting closed loops in the wall structure and their output can be a vectorized polygonal region that matches each room’s boundary), and obtain a floor map image including the plurality of spaces ( Vargas teaches at FIG. 2.5 and Section 2.1.4 that a CNN method is used to detect the junctions in a floor plan and then used the junctions for reconstructing the original wall structure based on semantic rules….we have to deal with a notable degree of content overlapping which also produces corners and cross-points and FIG. 4.9 that a virtual wall is created inside a T wall intersection. Vargas teaches at FIG. 2.5 connecting junctions into a set of primitives. 
Vargas teaches at Section 2.2 that room detection methods are concerned with detecting and segmenting the different room regions in the floor plan image and rooms are detected using geometrical methods on the vectorized wall structure or by detecting closed loops in the wall structure and their output can be a vectorized polygonal region that matches each room’s boundary). It would have been obvious to one of ordinary skill in the art before the filing date of the instant application to have incorporated Vargas’s identification of junctions within the architectural elements for the different shapes/elements including wall elements into Randolph/Aoki’s generation of a floorplan image by identifying the walls and wall-end-points (corners). One of ordinary skill in the art would have extracted wall segments and rooms of the floor plan based on the identified junctions to have reconstructed the output floor plan. Feltes teaches the claim limitation: identify, based on the plurality of straight lines, a first corner in the handwritten image (Feltes teaches at Section 3 that corners for wall image in architectural floor plan are detected and at FIG. 4 that corner points are detected and at Section 4 that wall edges are constructed using detected corner points and at FIG. 5 extracting corners using the proposed method and gaps are closed by connecting the pairs of previously detected edges). identify, based on the first corner, a plurality of spaces in the handwritten image ( Feltes teaches at Section 4 that gap closing is a process in architectural floor plan analysis which is performed to find the closed regions of rooms in the floor plan and at FIG. 5 generating the floor plan image), and obtain a floor map image including the plurality of spaces (Feltes teaches at Section 4 that gap closing is a process in architectural floor plan analysis which is performed to find the closed regions of rooms in the floor plan and at FIG. 5 generating the floor plan image). 
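Feltes's gap-closing condition cited in this mapping (two previously detected edges are connected only if the rectangle created by joining them has 90 degree angles and the area between the edge candidates is empty) can be sketched as follows, restricted to axis-aligned horizontal edges on a pixel grid for brevity. The pixel-set representation and the left/right ordering of the edges are assumptions, not Feltes's data structures.

```python
def can_close_gap(edge_a, edge_b, occupied):
    """edge_a and edge_b are ((x1, y), (x2, y)) horizontal wall edges with
    edge_a to the left of edge_b; `occupied` is the set of wall pixels.
    The gap may be closed iff the edges lie on the same row (so the joining
    rectangle's angles are 90 degrees) and no other wall pixel lies in the
    area between them (so two edges separated by a wall stay unconnected)."""
    (ax1, ay), (ax2, ay2) = edge_a
    (bx1, by), (bx2, by2) = edge_b
    if not (ay == ay2 == by == by2):   # misaligned: angles would not be 90 degrees
        return False
    a_right, b_left = max(ax1, ax2), min(bx1, bx2)
    # the span strictly between the two edges must be empty
    return all((x, ay) not in occupied for x in range(a_right + 1, b_left))
```

A door-sized gap between two wall edges on the same row would be closed, while the same pair with an intervening wall pixel, or a pair on different rows, would not.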
Sfar, Feltes, Heras, Pitzer, Randolph, Aoki and Shio further teach the claim limitation that the at least one processor is further configured to execute the at least one instruction to: based on an intersection being present between a first straight line and a second straight line, among the plurality of straight lines, identify the first corner with respect to the intersection of the first straight line and the second straight line, and based on an intersection being not present between a third straight line and a fourth straight line, among the plurality of straight lines, determine whether the third straight line and the fourth straight line form the first corner based on a distance between the third straight line and the fourth straight line and an angle formed by the third straight line and the fourth straight line ( Sfar teaches at Paragraph [0166] 4. Line joining; This step is only applied on the wall-specific mask. The processed mask returned by step 3 comprises a set of line segments corresponding to straight walls. This step consists in detecting where there is a junction between two straight walls and modifying the corresponding line segments by joining their extremity. The developed algorithm is as follows: [0167] a. For every pair of line segments, if they are not collinear and the distance between the two segments is below a predefined threshold, the two segments are modified such that one of their endpoints overlaps with the point corresponding to the intersection between the two lines containing the two segments. [0168] b. While segment pairs have been modified in the previous a. step, return to a. step. Otherwise, return the final set of line segments. Sfar teaches at Paragraph [0170] The next step may consist in constructing 3D primitives required by the 3D reconstruction API such as wall primitives, door primitives and window primitives. 
For instance, the wall primitive may be defined by the following attributes: coordinates of the two endpoints, thickness, height, references of the adjacent walls. Thanks to the refined mask, wall, window and door primitives may be easily built. Indeed, information such as coordinates of endpoints, reference of adjacent walls, reference of the wall to which a window (resp. door) belongs can be easily extracted from the refined mask. Other information such as wall/window/door height or width may be predefined or provided by a user. Pitzer teaches at Paragraph 0031 that for wall intersections that do not intersect at 90 degree angles, the mobile electronic device generates a user interface that enables the operator to measure a precise wall angle or to enter the angle manually. Pitzer teaches at FIG. 4 and Paragraph 0028 that the sketched floor plan includes an intersection between two strokes corresponding to a corner between two or more walls. Pitzer teaches at FIG. 6 and Paragraph 0034 a precise floor plan 604 corresponding to the approximate floor plan 404 and the walls in the floor plan intersect each other at perpendicular angles and measured angles between walls that do not meet at perpendicular angles. Pitzer teaches at Paragraph 0004 that different rooms in a building are separated by walls and at FIGS. 10A-10B and Paragraph 0039 that the process 200 optionally continues for multiple rooms in a building. Pitzer teaches at Paragraph 0039 that the mobile electronic device 104 identifies corresponding corners and structural features of the multiple floor plans and arranges the floor plans for the rooms automatically using, for example, a registration process that fits floor plans of different rooms together based on common wall edges between adjacent rooms. FIG. 10A depicts display 1000 including multiple room floor plans in a larger building, and FIG. 10B depicts another display 1050 including three-dimensional models of multiple rooms in a building. 
Feltes teaches at Section 3 that each wall edge is processed for closing the gaps in the probable locations of doors and windows and detected corner points are used for extraction of parallel edges and at Section 4 that gaps are closed by connecting pairs of previously detected edges and the angles of the rectangle created by connecting the two edges should be 90 degrees and the area between the two edge candidates should be empty. This ensures that the two edges will not be connected if they are separated by another wall. Feltes thus teaches that the first two edges intersect at a second corner point while the second two edges do not intersect with each other, as there is no angle formed by the two edges. Feltes also teaches at FIG. 5 that a third corner is detected between the third line and the fourth line, and the third line and the fourth line form a 90 degree angle. Shio teaches at Section 3.1 that an element may be converted into multiple line segments combined with a linear array of feature points such as crossing points, branching points or bending points and a closet is composed of four triangles with one common vertex in the middle and contours of closed areas are found through tracing. Shio thus shows that a closet is found with an intersection between a first contour/line and a second contour/line at the center of the closet. Randolph teaches at Paragraph 0082-0086 that the system 100 can find quadrilaterals (which inherently include corners/vertices) and shapes/spaces formed by connecting straight lines. This can result in a vectorized floor plan and the inspection application 112 can then add labels, such as room labels and dimensions to the vectorized floor plan. Aoki teaches at Section 3 that a closet is composed of four triangles. Aoki teaches at Section 3.1 that the line code’s locations are approximated on the grid lines. 
At the crossing position, the line attributes are determined from the codes of the four directions around the crossing position, since the crossing line acts as noise. In this way, most noises near line-crossings and junctions can be removed. Aoki teaches at Section 3.3 to identify LEs, line segments are mainly used and the LEs are identified from the line segment’s attributes. Aoki teaches at Section 3.3 that the base shape of a closet is a triangle and that of a tatami region is a rectangle. The rectangle shape inherently includes corners and at Section 4 that the system correctly identified closed region elements. Aoki teaches at Section 3.2 that REs are represented by the shapes and the location of one or more closed regions….line segments have crossings and junctions. Aoki teaches at Section 3.3 to identify LEs, line segments are mainly used and the LEs are identified from the line segment’s attributes. Aoki teaches at Section 3.3 that the base shape of a closet is a triangle and that of a tatami region is a rectangle. The rectangle shape inherently includes corners and at Section 4 that the system correctly identified 2282 of 2442 closed region elements. Aoki teaches at Section 3.3 that the REs are identified from their base shapes and at Figure 9 obtaining a converted image of the floor plan including the plurality of spaces/shapes such as the rectangular Tatami shape and closet shape). 
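The distinction this limitation draws, between a pair of straight lines whose intersection is present (the corner is the crossing point) and a pair whose intersection is absent (falling back to the distance-and-angle test), reduces to a segment-intersection check. A standard orientation-based predicate, not drawn from any cited reference, can be sketched as follows; collinear-overlap corner cases are omitted for brevity.

```python
def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p):
    +1 left turn, -1 right turn, 0 collinear."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(seg_a, seg_b):
    """True iff the two closed segments cross or touch at a point
    (collinear-overlap cases are ignored in this sketch)."""
    p1, p2 = seg_a
    p3, p4 = seg_b
    # the segments intersect iff each segment's endpoints straddle
    # (or touch) the line supporting the other segment
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    return d1 != d2 and d3 != d4
```

When the predicate is true, the crossing point itself supplies the first corner; when false, the threshold-based determination mapped above applies.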
Feltes, Heras, Sfar, Pitzer, Randolph, Aoki and Shio further teach the claim limitation that the at least one processor is further configured to execute the at least one instruction to: based on the distance between the third straight line and the fourth straight line being less than a first threshold distance and the angle formed by the third straight line and the fourth straight line being greater than a threshold angle, determine that the third straight line and the fourth straight line form the first corner, and identify the first corner of the third straight line and the fourth straight line by extending one of the third straight line and the fourth straight line ( Feltes teaches determining the separation distance between the two edges to determine whether the two wall edges intersect (forming a 90 degree angle) or not intersect (forming parallel edges or being wall segments of different rooms). The determination of the intersection or no intersection is based on the separation distance being smaller than a distance threshold and the angle being larger than an angle threshold. Feltes teaches at FIG. 5 and Section 4 that the angles of the rectangle created by connecting the two edges should be approximately 90 degrees and the two edges are aligned and the area between the two edges should be empty and the two edges will not be connected if they are separated by another wall (meaning that the angle of the two intersecting edges which are determined to be connected is greater than an angle threshold and the distance between the two parallel edges which are determined not to be connected is larger than a distance threshold, which implicitly means that the distance between two intersecting edges is smaller than a distance threshold). Feltes further teaches resolving errors in corner point detection and gaps are closed by connecting previously detected edges. Pitzer teaches at FIG. 
4 and Paragraph 0028 that the sketched floor plan includes an intersection between two strokes corresponding to a corner between two or more walls. Pitzer teaches at FIG. 6 and Paragraph 0034 a precise floor plan 604 corresponding to the approximate floor plan 404 and the walls in the floor plan intersect each other at perpendicular angles and measured angles between walls that do not meet at perpendicular angles. Pitzer teaches at Paragraph 0004 that different rooms in a building are separated by walls and at FIGS. 10A-10B and Paragraph 0039 that the process 200 optionally continues for multiple rooms in a building. Pitzer teaches at Paragraph 0039 that the mobile electronic device 104 identifies corresponding corners and structural features of the multiple floor plans and arranges the floor plans for the rooms automatically using, for example, a registration process that fits floor plans of different rooms together based on common wall edges between adjacent rooms. FIG. 10A depicts display 1000 including multiple room floor plans in a larger building, and FIG. 10B depicts another display 1050 including three-dimensional models of multiple rooms in a building. Sfar teaches at Paragraph 0156 that the first and/or second predetermined collinearity threshold(s) may be defined as a threshold on the (non-oriented) angle between two line segments. Said threshold may be defined as a function of the distribution of all angles formed by two contiguous walls in the training dataset. The collinearity threshold may be defined thanks to this distribution. For example, the value of the angle such that less than 5% of the angles formed by two contiguous walls are lower than this value. If said value is high (e.g. higher than 45°), it can be lowered to 30°. In practice, a value of the order of 30° provides good results. 
Sfar teaches at Paragraph [0157] In examples, the predetermined distance threshold may be defined as a function of other object instances such as windows or doors. It may be fixed to lower than 30% of the average width of a door or window, so as to allow obtaining few false positives. Sfar teaches at Paragraph [0166] 4. Line joining; This step is only applied on the wall-specific mask. The processed mask returned by step 3 comprises a set of line segments corresponding to straight walls. This step consists in detecting where there is a junction between two straight walls and modifying the corresponding line segments by joining their extremity. The developed algorithm is as follows: [0167] a. For every pair of line segments, if they are not collinear and the distance between the two segments is below a predefined threshold, the two segments are modified such that one of their endpoints overlaps with the point corresponding to the intersection between the two lines containing the two segments. [0168] b. While segment pairs have been modified in the previous a. step, return to a. step. Otherwise, return the final set of line segments. Heras teaches at Section 4.1.1 that a raw vectorization of the image leads to encounter multiple corners and small unaligned segments for completely straight walls. This issue is solved by applying a morphological opening after closing the wall image, which allows to delete small noise and join unconnected pixels, and a logical AND with the original opened image to make borders straighter. Heras teaches at Section 4.1 that the wall is usually delimited by intersections with other walls. Heras teaches at FIG. 5 identifying N and L junctions (corners) based on a plurality of straight lines including the colored straight lines such as the green straight line, red straight line, blue straight line, black straight line. 
Heras teaches at Section 4.1.1 that, in detecting linear elements including walls, a raw vectorization of the image leads to encountering multiple corners and small unaligned segments for completely straight walls. Heras teaches at FIG. 5 that, based on the distance between the green line and the black line being less than a threshold distance and the angle formed being 90 degrees, it is determined that the green line and the black line form an L-junction, and the L-junction is identified by extending one of the green line and the black line. Feltes teaches at Section 3 that each wall edge is processed for closing the gaps in the probable locations of doors and windows and detected corner points are used for extraction of parallel edges, and at Section 4 that gaps are closed by connecting pairs of previously detected edges, the angles of the rectangle created by connecting the two edges should be 90 degrees, and the area between the two edge candidates should be empty. This ensures that the two edges will not be connected if they are separated by another wall. Feltes thus teaches that the first two edges intersect at a second corner point while the second two edges do not intersect each other, as there is no angle formed by the two edges. Feltes also teaches at FIG. 5 that a third corner is detected between the third line and the fourth line, which form a 90-degree angle. Shio teaches at Section 3.1 that an element may be converted into multiple line segments combined with a linear array of feature points such as crossing points, branching points or bending points, that a closet is composed of four triangles with one common vertex in the middle, and that contours of closed areas are found through tracing. Shio thus shows that a closet is found with an intersection between a first contour/line and a second contour/line at the center of the closet.
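For illustration only (not part of the record), the Feltes gap-closing test cited above (the connecting rectangle must have right angles and the area between the edge candidates must be empty) can be sketched for axis-aligned horizontal edges on an integer grid; the edge representation and grid occupancy model are assumptions of the sketch:

```python
def can_close_gap(e1, e2, occupied):
    """Sketch of Feltes's Section 4 condition for horizontal wall edges
    given as ((x_start, y), (x_end, y)) on an integer grid. `occupied` is a
    set of (x, y) cells belonging to other walls. The gap may be closed only
    when the edges are parallel and aligned (so the connecting rectangle has
    90-degree angles) and no other wall crosses the gap between them."""
    (x1s, y1), (x1e, y1b) = e1
    (x2s, y2), (x2e, y2b) = e2
    if not (y1 == y1b == y2 == y2b):
        return False  # not parallel/aligned: rectangle angles are not 90 deg
    if x1s > x2s:
        (x1s, x1e), (x2s, x2e) = (x2s, x2e), (x1s, x1e)  # e1 is the left edge
    # the area between the two edge candidates must be empty
    return all((x, y1) not in occupied for x in range(x1e + 1, x2s))
```

The third-wall check mirrors Feltes's remark that two edges are not connected when another wall separates them.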
Randolph teaches at Paragraphs 0082-0086 that the system 100 can find quadrilaterals (which inherently include corners/vertices) and shapes/spaces formed by connecting straight lines. This can result in a vectorized floor plan, and the inspection application 112 can then add labels, such as room labels and dimensions, to the vectorized floor plan. Aoki teaches at Section 3 that a closet is composed of four triangles. Aoki teaches at Section 3.1 that the line code’s locations are approximated on the grid lines. At the crossing position, the line attributes are determined from the codes of the four directions around the crossing position, since the crossing line acts as noise. In this way, most noises near line-crossings and junctions can be removed. Aoki teaches at Section 3.2 that REs are represented by the shapes and the location of one or more closed regions… line segments have crossings and junctions. Aoki teaches at Section 3.3 that to identify LEs, line segments are mainly used, and the LEs are identified from the line segments’ attributes. Aoki teaches at Section 3.3 that the base shape of a closet is a triangle and that of a tatami region is a rectangle. The rectangle shape inherently includes corners, and Aoki teaches at Section 4 that the system correctly identified 2282 of 2442 closed region elements. Aoki teaches at Section 3.3 that the REs are identified from their base shapes and at Figure 9 obtaining a converted image of the floor plan including the plurality of spaces/shapes such as the rectangular Tatami shape and closet shape). 
Sfar teaches the claim limitation that the at least one processor is further configured to execute the at least one instruction to: based on the distance between the third straight line and the fourth straight line being less than a first threshold distance and the angle formed by the third straight line and the fourth straight line being greater than a threshold angle, determine that the third straight line and the fourth straight line form the first corner, and identify the first corner of the third straight line and the fourth straight line by extending one of the third straight line and the fourth straight line (Sfar teaches at Paragraphs 0156-0167 that if two line segments are not collinear, the angle formed by the two line segments is larger than a threshold, and that, based on the corner intersection between the two line segments, whether the two line segments form a junction is determined based on the distance being below a predefined threshold and the angle being larger than a threshold (because of non-collinearity). Sfar further teaches, based on an intersection between the two line segments not being present, determining whether the two line segments form a third corner based on the distance between the two line segments being below a predefined threshold and on the angle formed by the two line segments, by detecting non-collinearity where there is a junction between two straight walls and modifying the corresponding line segments by joining their extremities). Sfar teaches at Paragraph [0167] For every pair of line segments, if they are not collinear and the distance between the two segments is below a predefined threshold, the two segments are modified such that one of their endpoints overlaps with the point corresponding to the intersection between the two lines containing the two segments. 
Sfar teaches at Paragraph 0156 that the first and/or second predetermined collinearity threshold(s) may be defined as a threshold on the (non-oriented) angle between two line segments. Said threshold may be defined as a function of the distribution of all angles formed by two contiguous walls in the training dataset. The collinearity threshold may be defined thanks to this distribution. For example, the value of the angle such that less than 5% of the angles formed by two contiguous walls are lower than this value. If said value is high (e.g. higher than 45°), it can be lowered to 30°. In practice, a value of the order of 30° provides good results. Sfar teaches at Paragraph [0166] 4. Line joining; This step is only applied on the wall-specific mask. The processed mask returned by step 3 comprises a set of line segments corresponding to straight walls. This step consists in detecting where there is a junction between two straight walls and modifying the corresponding line segments by joining their extremity. The developed algorithm is as follows: [0167] a. For every pair of line segments, if they are not collinear and the distance between the two segments is below a predefined threshold, the two segments are modified such that one of their endpoints overlaps with the point corresponding to the intersection between the two lines containing the two segments. [0168] b. While segment pairs have been modified in the previous a. step, return to a. step. Otherwise, return the final set of line segments. Sfar teaches at Paragraph [0170] The next step may consist in constructing 3D primitives required by the 3D reconstruction API such as wall primitives, door primitives and window primitives. For instance, the wall primitive may be defined by the following attributes: coordinates of the two endpoints, thickness, height, references of the adjacent walls. Thanks to the refined mask, wall, window and door primitives may be easily built. 
Indeed, information such as coordinates of endpoints, reference of adjacent walls, reference of the wall to which a window (resp. door) belongs can be easily extracted from the refined mask. Other information such as wall/window/door height or width may be predefined or provided by a user). Re Claim 2: The claim 2 encompasses the same scope of invention as that of the claim 1 except additional claim limitation that the at least one processor is further configured to execute the at least one instruction to change, based on a starting point and an end point of each of the plurality of non-straight lines in the handwritten image, the plurality of non-straight lines to the plurality of straight lines. Randolph/Pitzer and Aoki further teach the claim limitation that the at least one processor is further configured to execute the at least one instruction to change, based on a starting point and an end point of each of the plurality of non-straight lines in the handwritten image, the plurality of non-straight lines to the plurality of straight lines ( Pitzer teaches at FIG. 4 and Paragraph 0028 that the sketched floor plan includes an intersection between two strokes corresponding to a corner between two or more walls. Pitzer teaches at FIG. 6 and Paragraph 0034 a precise floor plan 604 corresponding to the approximate floor plan 404 and the walls in the floor plan intersect each other at perpendicular angles and measured angles between walls that do not meet at perpendicular angles. Pitzer teaches at Paragraph 0004 that different rooms in a building are separated by walls and at FIGS. 10A-10B and Paragraph 0039 that the process 200 optionally continues for multiple rooms in a building. 
Pitzer teaches at Paragraph 0039 that the mobile electronic device 104 identifies corresponding corners and structural features of the multiple floor plans and arranges the floor plans for the rooms automatically using, for example, a registration process that fits floor plans of different rooms together based on common wall edges between adjacent rooms. FIG. 10A depicts display 1000 including multiple room floor plans in a larger building, and FIG. 10B depicts another display 1050 including three-dimensional models of multiple rooms in a building. Randolph teaches at Paragraph 0082-0084 that the system 100 can straighten the lines. Aoki teaches at Section 3.1 that the line segment extraction process determines the location and thickness of line segments from the image data and the line code’s locations are approximated on the grid lines and at the crossing position, the line attributes are determined from the codes of the four directions around the crossing position…our system uses a smoothing process for same-code sequences that are short. Aoki teaches at Section 3.2 the contour of the background region can be traced in a unique route which is not influenced by crossings or junctions and at Section 3.3 that the floor-plan elements are identified based on the line segment and closed region information and at Section 4 that the system correctly identified 6037 of 6284 line elements). Re Claim 5: The claim 5 encompasses the same scope of invention as that of the claim 1 except additional claim limitation that the at least one processor is further configured to execute the at least one instruction to identify the plurality of spaces in a shape of a polygon based on a smallest closed loop with the identified first corner as a vertex. 
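For illustration only (not part of the record), the claim-2 operation discussed above, replacing each non-straight hand-drawn line with the straight line between its starting point and end point, admits a one-line sketch; representing each stroke as a list of sampled points is an assumption:

```python
def straighten_strokes(strokes):
    """Claim-2 sketch: replace each hand-drawn stroke (a sequence of sampled
    points) with the straight segment from its starting point to its end
    point, yielding the plurality of straight lines."""
    return [(stroke[0], stroke[-1]) for stroke in strokes]
```

A wobbly stroke sampled along a roughly horizontal wall collapses to the single segment between its first and last points.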
Pitzer, Heras, Randolph, Aoki, Feltes and Shio further teach the claim limitation that the at least one processor is further configured to execute the at least one instruction to identify the plurality of spaces in a shape of a polygon based on a smallest closed loop with the identified first corner as a vertex (Pitzer teaches at FIG. 4 and Paragraph 0028 that the sketched floor plan includes an intersection between two strokes corresponding to a corner between two or more walls. Pitzer teaches at FIG. 6 and Paragraph 0034 a precise floor plan 604 corresponding to the approximate floor plan 404 and the walls in the floor plan intersect each other at perpendicular angles and measured angles between walls that do not meet at perpendicular angles. Pitzer teaches at Paragraph 0004 that different rooms in a building are separated by walls and at FIGS. 10A-10B and Paragraph 0039 that the process 200 optionally continues for multiple rooms in a building. Pitzer teaches at Paragraph 0039 that the mobile electronic device 104 identifies corresponding corners and structural features of the multiple floor plans and arranges the floor plans for the rooms automatically using, for example, a registration process that fits floor plans of different rooms together based on common wall edges between adjacent rooms. FIG. 10A depicts display 1000 including multiple room floor plans in a larger building, and FIG. 10B depicts another display 1050 including three-dimensional models of multiple rooms in a building. Heras teaches at Section 2 that the window is formed by a small closed loop and a room is formed by an even bigger loop. 
Feltes teaches at Section 3 that each wall edge is processed for closing the gaps in the probable locations of doors and windows and detected corner points are used for extraction of parallel edges, and at Section 4 that gaps are closed by connecting pairs of previously detected edges, the angles of the rectangle created by connecting the two edges should be 90 degrees, and the area between the two edge candidates should be empty. This ensures that the two edges will not be connected if they are separated by another wall. Feltes thus teaches that the first two edges intersect at a second corner point while the second two edges do not intersect each other, as there is no angle formed by the two edges. Feltes also teaches at FIG. 5 that a third corner is detected between the third line and the fourth line, which form a 90-degree angle. Shio teaches at Section 2.2 that borders are first drawn and then the inside space is filled or hatched and at Section 3 that the shape of closed areas confined by line segments is found through pattern matching and at Section 3 that a Japanese room is a certain combination of rectangles. Shio teaches at Section 3.3 that prior to area extraction, preprocessing is performed to fill line breaks of 2 mm or less. Shio teaches that the basic patterns are squares, triangles and other primitive shapes that make up architectural elements as shown in FIG. 8. Shio teaches at Section 3.1 that an element may be converted into multiple line segments combined with a linear array of feature points such as crossing points, branching points or bending points, that a closet is composed of four triangles with one common vertex in the middle, and that contours of closed areas are found through tracing. Shio thus shows that a closet is found with an intersection between a first contour/line and a second contour/line at the center of the closet. 
Randolph teaches at Paragraphs 0082-0086 that the system 100 can find quadrilaterals (which inherently include corners/vertices) and shapes/spaces formed by connecting straight lines. This can result in a vectorized floor plan, and the inspection application 112 can then add labels, such as room labels and dimensions, to the vectorized floor plan. Aoki teaches at Section 3 that a closet is composed of four triangles. Aoki teaches at Section 3.1 that the line code’s locations are approximated on the grid lines. At the crossing position, the line attributes are determined from the codes of the four directions around the crossing position, since the crossing line acts as noise. In this way, most noises near line-crossings and junctions can be removed. Aoki teaches at Section 3.3 that to identify LEs, line segments are mainly used, and the LEs are identified from the line segments’ attributes. Aoki teaches at Section 3.3 that the base shape of a closet is a triangle and that of a tatami region is a rectangle. The rectangle shape inherently includes corners and at Section 4 that the system correctly identified closed region elements), identify, based on the first corner, a plurality of spaces in the handwritten image (Pitzer teaches at FIG. 4 and Paragraph 0028 that the sketched floor plan includes an intersection between two strokes corresponding to a corner between two or more walls. Pitzer teaches at FIG. 6 and Paragraph 0034 a precise floor plan 604 corresponding to the approximate floor plan 404 and the walls in the floor plan intersect each other at perpendicular angles and measured angles between walls that do not meet at perpendicular angles. Pitzer teaches at Paragraph 0004 that different rooms in a building are separated by walls and at FIGS. 10A-10B and Paragraph 0039 that the process 200 optionally continues for multiple rooms in a building. 
Pitzer teaches at Paragraph 0039 that the mobile electronic device 104 identifies corresponding corners and structural features of the multiple floor plans and arranges the floor plans for the rooms automatically using, for example, a registration process that fits floor plans of different rooms together based on common wall edges between adjacent rooms. FIG. 10A depicts display 1000 including multiple room floor plans in a larger building, and FIG. 10B depicts another display 1050 including three-dimensional models of multiple rooms in a building. Aoki teaches at Section 3.2 that REs are represented by the shapes and the location of one or more closed regions… line segments have crossings and junctions. Aoki teaches at Section 3.3 that to identify LEs, line segments are mainly used, and the LEs are identified from the line segments’ attributes. Aoki teaches at Section 3.3 that the base shape of a closet is a triangle and that of a tatami region is a rectangle. The rectangle shape inherently includes corners and at Section 4 that the system correctly identified 2282 of 2442 closed region elements), and obtain a floor map image including the plurality of spaces (Pitzer teaches at FIG. 4 and Paragraph 0028 that the sketched floor plan includes an intersection between two strokes corresponding to a corner between two or more walls. Pitzer teaches at FIG. 6 and Paragraph 0034 a precise floor plan 604 corresponding to the approximate floor plan 404 and the walls in the floor plan intersect each other at perpendicular angles and measured angles between walls that do not meet at perpendicular angles. Pitzer teaches at Paragraph 0004 that different rooms in a building are separated by walls and at FIGS. 10A-10B and Paragraph 0039 that the process 200 optionally continues for multiple rooms in a building. 
Pitzer teaches at Paragraph 0039 that the mobile electronic device 104 identifies corresponding corners and structural features of the multiple floor plans and arranges the floor plans for the rooms automatically using, for example, a registration process that fits floor plans of different rooms together based on common wall edges between adjacent rooms. FIG. 10A depicts display 1000 including multiple room floor plans in a larger building, and FIG. 10B depicts another display 1050 including three-dimensional models of multiple rooms in a building. Aoki teaches at Section 3.3 that the REs are identified from their base shapes and at Figure 9 obtaining a converted image of the floor plan including the plurality of spaces/shapes such as the rectangular Tatami shape and closet shape). Re Claim 6: The claim 6 encompasses the same scope of invention as that of the claim 5 except additional claim limitation that the at least one processor is further configured to execute the at least one instruction to: based on a determination that a distance between a first edge forming a first space among the plurality of spaces and a second edge forming a second space adjacent to the first space is less than a second threshold distance, identify that the first edge and the second edge are overlapped, and identify, with respect to a midpoint of a long edge between the first edge and the second edge, the plurality of spaces by changing a starting point and an end point of the long edge. 
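For illustration only (not part of the record), the claim-5 operation of identifying a space as the smallest closed loop having an identified corner as a vertex can be sketched as a breadth-first shortest-cycle search over the corner graph; reading "smallest" as fewest corners, rather than smallest enclosed area, is a simplifying assumption of the sketch:

```python
from collections import deque

def smallest_loop(walls, start):
    """Claim-5 sketch: treat identified corners as graph nodes connected by
    wall segments, and return the closed loop through `start` with the
    fewest corners (a candidate room polygon) via breadth-first search."""
    adj = {}
    for u, v in walls:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    best = None
    for first in adj[start]:
        # shortest path from `first` back to `start` that does not reuse the
        # (start, first) wall; with that wall it closes a loop
        prev = {start: None, first: start}
        queue, found = deque([first]), None
        while queue and found is None:
            node = queue.popleft()
            for nxt in adj[node]:
                if nxt == start and node != first:
                    found = node  # reaching start again closes the loop
                    break
                if nxt not in prev:
                    prev[nxt] = node
                    queue.append(nxt)
        if found is not None:
            path = [found]
            while path[-1] != start:
                path.append(prev[path[-1]])
            path.reverse()  # vertices in order: start, first, ..., found
            if best is None or len(path) < len(best):
                best = path
    return best
```

On two unit rooms sharing a wall, the search through a shared corner returns the four-corner loop of one room rather than the six-corner outer boundary.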
Sfar and Shio further teach the claim limitation that the at least one processor is further configured to execute the at least one instruction to: based on a determination that a distance between a first edge forming a first space among the plurality of spaces and a second edge forming a second space adjacent to the first space is less than a second threshold distance, identify that the first edge and the second edge are overlapped, and identify, with respect to a midpoint of a long edge between the first edge and the second edge, the plurality of spaces by changing a starting point and an end point of the long edge (Sfar teaches at Paragraph [0166] 4. Line joining; This step is only applied on the wall-specific mask. The processed mask returned by step 3 comprises a set of line segments corresponding to straight walls. This step consists in detecting where there is a junction between two straight walls and modifying the corresponding line segments by joining their extremity. The developed algorithm is as follows: [0167] a. For every pair of line segments, if they are not collinear and the distance between the two segments is below a predefined threshold, the two segments are modified such that one of their endpoints overlaps with the point corresponding to the intersection between the two lines containing the two segments. [0168] b. While segment pairs have been modified in the previous a. step, return to a. step. Otherwise, return the final set of line segments. Shio teaches at Section 3.3 that prior to area extraction, preprocessing is performed to fill line breaks of 2 mm or less. Shio teaches at Section 2.2 that borders are first drawn and then the inside space is filled or hatched. Shio teaches at FIG. 1, FIG. 9 and Table 1 obtaining information about an object from the handwritten image and inserting the information about the object into the plurality of spaces. 
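For illustration only (not part of the record), the claim-6 operation of merging nearly coincident edges of adjacent spaces onto one shared wall can be sketched for horizontal edges; snapping both edges onto the line of the longer edge is a simplification of the claimed midpoint-based adjustment, and the edge representation is an assumption of the sketch:

```python
def merge_adjacent_edges(e1, e2, threshold):
    """Claim-6 sketch for horizontal edges ((x_start, y), (x_end, y)): when
    the gap between a first room's edge and the adjacent room's edge is
    below the threshold, identify them as overlapped and move both onto the
    y of the longer ('long') edge, so the two rooms share a single wall."""
    y1, y2 = e1[0][1], e2[0][1]
    if abs(y1 - y2) >= threshold:
        return e1, e2  # distinct walls: leave both edges unchanged
    len1 = abs(e1[1][0] - e1[0][0])
    len2 = abs(e2[1][0] - e2[0][0])
    y_shared = (e1 if len1 >= len2 else e2)[0][1]  # y of the long edge
    snap = lambda e: ((e[0][0], y_shared), (e[1][0], y_shared))
    return snap(e1), snap(e2)
```

With a 0.5-unit threshold, two edges 0.2 apart collapse onto the longer edge's line; with a 0.1-unit threshold they are kept as separate walls.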
Shio teaches at Section 2.2 that borders are first drawn and then the inside space is filled or hatched and at Section 3 that the shape of closed areas confined by line segments is found through pattern matching and at Section 3 that a Japanese room is a certain combination of rectangles. Shio teaches at Section 3.3 that prior to area extraction, preprocessing is performed to fill line breaks of 2 mm or less. Shio teaches that the basic patterns are squares, triangles and other primitive shapes that make up architectural elements as shown in FIG. 8). Re Claim 7: The claim 7 encompasses the same scope of invention as that of the claim 1 except additional claim limitation that the at least one processor is further configured to execute the at least one instruction to: obtain information about an object from the handwritten image, and insert the information about the object to the plurality of spaces. Bergin ‘115/Shio further teaches the claim limitation that the at least one processor is further configured to execute the at least one instruction to: obtain information about an object from the handwritten image, and insert the information about the object to the plurality of spaces (Bergin ‘115 teaches at Paragraph 0056 that FIG. 5B illustrates the high level encoding of that plan that produces a labeled representation of the plan including stairs, restrooms, elevator+lobby, and the double loaded corridor elements. Similarly, FIG. 5C illustrates the same plan while FIG. 5D illustrates the low level encoding conducted during the recognition process in which the additional elements of the mech. stack, elevator, exit door, and window elements have been identified and labeled accordingly. Shio teaches at Section 3.3 that prior to area extraction, preprocessing is performed to fill line breaks of 2 mm or less. Shio teaches at Section 2.2 that borders are first drawn and then the inside space is filled or hatched. Shio teaches at FIG. 1, FIG. 
9 and Table 1 obtaining information about an object from the handwritten image and inserting the information about the object into the plurality of spaces. Shio teaches at Section 2.2 that borders are first drawn and then the inside space is filled or hatched and at Section 3 that the shape of closed areas confined by line segments is found through pattern matching and at Section 3 that a Japanese room is a certain combination of rectangles. Shio teaches at Section 3.3 that prior to area extraction, preprocessing is performed to fill line breaks of 2 mm or less. Shio teaches that the basic patterns are squares, triangles and other primitive shapes that make up architectural elements as shown in FIG. 8). Re Claim 11: The claim 11 recites a method of controlling an electronic device, the method comprising: changing a plurality of non-straight lines in a handwritten image to a plurality of straight lines; identifying, based on the plurality of straight lines, a first corner in the handwritten image; identifying, based on the first corner, a plurality of spaces in the handwritten image; and obtaining a floor map image including the plurality of spaces. The claim 11 is in parallel with the claim 1 in a method form. The claim 11 is subject to the same rationale of rejection as the claim 1. Re Claim 12: The claim 12 encompasses the same scope of invention as that of the claim 11 except additional claim limitation that the changing the plurality of non-straight lines comprises, changing, based on a starting point and an end point of each of the plurality of non-straight lines in the handwritten image, the plurality of non-straight lines to the plurality of straight lines. The claim 12 is in parallel with the claim 2 in a method form. The claim 12 is subject to the same rationale of rejection as the claim 2. The claim 14 is in parallel with the claim 4 in a method form. The claim 14 is subject to the same rationale of rejection as the claim 4. 
Re Claim 15: The claim 15 encompasses the same scope of invention as that of the claim 11 except additional claim limitation that the identifying the plurality of spaces comprises identifying the plurality of spaces in a shape of a polygon based on a smallest closed loop with the identified first corner as a vertex. The claim 15 is in parallel with the claim 5 in a method form. The claim 15 is subject to the same rationale of rejection as the claim 5. Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Randolph US-PGPUB No. 2017/0091885 (hereinafter Randolph) in view of Pitzer et al. US-PGPUB No. 2014/0267717 (hereinafter Pitzer); Bergin et al. US-PGPUB No. 2020/0151923 (hereinafter Bergin ‘923); Bergin et al. US-PGPUB No. 2019/0228115 (hereinafter Bergin ‘115); R. Sfar et al. US-PGPUB No. 2019/0243928 (hereinafter Sfar); Y. Aoki, et al., “A Prototype System for Interpreting Hand-Sketched Floor Plans”, Proc. of 13th International Conf. on Pattern Recognition, August 25-29, 1996, Vienna, Austria, pp. 747-751 (hereinafter Aoki); Akio Shio et al., “Sketch Plan: A Prototype System for Interpreting Hand-Sketched Floor Plans”, Systems and Computers in Japan, Vol. 31, No. 6, 2000, pp. 10-18 (hereinafter Shio); L. Heras, et al., “Statistical Segmentation and Structural Recognition for Floor Plan Interpretation”, Springer, Dec. 3, 2013, pp. 221-237 (hereinafter Heras); D. Vargas, “Wall Extraction and Room Detection for Multi-Unit Architectural Floor Plans”, Master of Science Thesis, University of Cauca, Colombia, 2015, pp. 1-98 (hereinafter Vargas); Max Feltes, et al., “Improved Contour-Based Corner Detection for Architectural Floor Plans”, 10th International Workshop, GREC 2013, Bethlehem, PA, USA, August 20-21, 2013, pp. 191-203 (hereinafter Feltes); S. Ahmed, et al., “Automatic Room Detection and Room Labeling from Architectural Floor Plans”, 2012 10th IAPR International Workshop on Document Analysis Systems, March 27-29, 2012, pp. 
339-343 (hereinafter Ahmed). Re Claim 8: The claim 8 encompasses the same scope of invention as that of the claim 1 except additional claim limitation that the at least one processor is further configured to execute the at least one instruction to: recognize a text in the handwritten image, and based on the recognized text, obtain information about the plurality of spaces. Bergin ‘115/Heras implicitly teaches the claim limitation that the at least one processor is further configured to execute the at least one instruction to: recognize a text in the handwritten image, and based on the recognized text, obtain information about the plurality of spaces (Bergin ‘115 teaches at Paragraph 0056 that FIG. 5B illustrates the high level encoding of that plan that produces a labeled representation of the plan including stairs, restrooms, elevator+lobby, and the double loaded corridor elements. Similarly, FIG. 5C illustrates the same plan while FIG. 5D illustrates the low level encoding conducted during the recognition process in which the additional elements of the mech. stack, elevator, exit door, and window elements have been identified and labeled accordingly. Heras teaches at Section 2 separating graphics from text and vectorizing the graphical layer and the input is an already vectorized plan with vectors, arcs and text that is preprocessed to obtain special symbols such as doors). Ahmed teaches the claim limitation that the at least one processor is further configured to execute the at least one instruction to: recognize a text in the handwritten image, and based on the recognized text, obtain information about the plurality of spaces (Ahmed teaches at Section III.A that text/graphics segmentation is performed and FIG. 
2 shows the text image extracted by the text/graphics segmentation process and Section III.C that after assigning labels to rooms, novel post-processing is performed). It would have been obvious to one of ordinary skill in the art before the filing date of the instant application to have obtained label information about the room based on the recognized text. One of ordinary skill in the art would have been motivated to have performed room labeling over the floor plan image by performing the text/graphics segmentation to have extracted labels in the input floor plan. Re Claim 9: The claim 9 encompasses the same scope of invention as that of the claim 8 except additional claim limitation that the at least one processor is further configured to execute the at least one instruction to obtain the floor map image by disposing an object in the plurality of spaces based on the information about the plurality of spaces. Bergin ‘115/Shio further teaches the claim limitation that the at least one processor is further configured to execute the at least one instruction to obtain the floor map image by disposing an object in the plurality of spaces based on the information about the plurality of spaces (Bergin ‘115 teaches at Paragraph 0056 that FIG. 5B illustrates the high level encoding of that plan that produces a labeled representation of the plan including stairs, restrooms, elevator+lobby, and the double loaded corridor elements. Similarly, FIG. 5C illustrates the same plan while FIG. 5D illustrates the low level encoding conducted during the recognition process in which the additional elements of the mech. stack, elevator, exit door, and window elements have been identified and labeled accordingly. Shio teaches at Section 3.3 that prior to area extraction, preprocessing is performed to fill line breaks of 2 mm or less. Shio teaches at Section 2.2 that borders are first drawn and then the inside space is filled or hatched. Shio teaches at FIG. 1, FIG. 
9 and Table 1 obtaining information about an object from the handwritten image and inserting the information about the object into the plurality of spaces. Shio teaches at Section 2.2 that borders are first drawn and then the inside space is filled or hatched and at Section 3 that the shape of closed areas confined by line segments is found through pattern matching and at Section 3 that a Japanese room is a certain combination of rectangles. Shio teaches at Section 3.3 that prior to area extraction, preprocessing is performed to fill line breaks of 2 mm or less. Shio teaches that the basic patterns are squares, triangles and other primitive shapes that make up architectural elements as shown in FIG. 8). Re Claim 10: The claim 10 encompasses the same scope of invention as that of the claim 1 except additional claim limitation that the at least one processor is further configured to execute the at least one instruction to: recognize at least one of a number and a text in the handwritten image, and change the plurality of spaces according to information corresponding to sizes of the plurality of spaces that are obtained based on the at least one of the number and the text. Bergin ‘115/Heras teaches the claim limitation that the at least one processor is further configured to execute the at least one instruction to: recognize at least one of a number and a text in the handwritten image, and change the plurality of spaces according to information corresponding to sizes of the plurality of spaces that are obtained based on the at least one of the number and the text (Bergin ‘115 teaches at Paragraph 0056 that FIG. 5B illustrates the high level encoding of that plan that produces a labeled representation of the plan including stairs, restrooms, elevator+lobby, and the double loaded corridor elements. Similarly, FIG. 5C illustrates the same plan while FIG. 
5D illustrates the low level encoding conducted during the recognition process in which the additional elements of the mech. stack, elevator, exit door, and window elements have been identified and labeled accordingly. Heras teaches at Section 2 separating graphics from text and vectorizing the graphical layer and the input is an already vectorized plan with vectors, arcs and text that is preprocessed to obtain special symbols such as doors). S. Ahmed, et al., “Automatic Room Detection and Room Labeling from Architectural Floor Plans”, 2012 10th IAPR International Workship on Document Analysis Systems, March 27-29, 2012, pp. 339-343 (hereinafter Ahmed) teaches the claim limitation that the at least one processor is further configured to execute the at least one instruction to: recognize at least one of a number and a text in the handwritten image, and change the plurality of spaces according to information corresponding to sizes of the plurality of spaces that are obtained based on the at least one of the number and the text (Ahmed teaches at FIGS. 2-3 that the labels include square footage numbers and at Section III.A that text/graphics segmentation is performed and FIG. 2 shows the text image extracted by the text/graphics segmentation process and Section III.C that after assigning label to rooms, novel post-processing is performed). It would have been obvious to one of the ordinary skill in the art before the filing date of the instant application to have obtained label information about the room based on the recognized text. One of the ordinary skill in the art would have been motivated to have performed room labeling over the floor plane image by performing the text/graphics segmentation to have extracted labels in the input floor plan. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). 
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIN CHENG WANG whose telephone number is (571)272-7665. The examiner can normally be reached Mon-Fri 8:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at 571-270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JIN CHENG WANG/
Primary Examiner, Art Unit 2617
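For context on the prior art the rejection relies on: the cited references describe finding rooms as closed areas confined by line segments, after small line breaks in the walls are filled. The following is an editorial sketch only, not code from any cited reference or from the application; the grid representation and the `find_rooms` helper are hypothetical, and it illustrates the general flood-fill idea behind that kind of room detection.

```python
from collections import deque

def find_rooms(grid):
    """Label enclosed regions (rooms) in a wall grid via flood fill.

    grid: list of equal-length strings, '#' = wall cell, ' ' = free space.
    Returns a dict mapping room id -> set of (row, col) cells. A free
    region that reaches the image border is open space, not a room.
    """
    rows, cols = len(grid), len(grid[0])
    seen = set()
    rooms, next_id = {}, 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '#' or (r, c) in seen:
                continue
            # Flood-fill the free region starting at (r, c).
            region, leaks = set(), False
            queue = deque([(r, c)])
            seen.add((r, c))
            while queue:
                y, x = queue.popleft()
                region.add((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < rows and 0 <= nx < cols):
                        leaks = True  # touches the border: not enclosed
                        continue
                    if grid[ny][nx] != '#' and (ny, nx) not in seen:
                        seen.add((ny, nx))
                        queue.append((ny, nx))
            if not leaks:
                rooms[next_id] = region
                next_id += 1
    return rooms

# A toy plan with one dividing wall yields two enclosed rooms.
plan = [
    "#########",
    "#   #   #",
    "#   #   #",
    "#########",
]
rooms = find_rooms(plan)
```

A preprocessing pass that closes small gaps in the walls (the "fill line breaks of 2 mm or less" step the rejection quotes from Shio) would run before this flood fill, so that a hand-drawn wall with a tiny break still encloses its room.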

Prosecution Timeline

Jan 05, 2024
Application Filed
Aug 30, 2025
Non-Final Rejection — §103
Oct 10, 2025
Interview Requested
Oct 28, 2025
Examiner Interview Summary
Oct 28, 2025
Applicant Interview (Telephonic)
Dec 03, 2025
Response Filed
Mar 02, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594883
DISPLAY DEVICE FOR DISPLAYING PATHS OF A VEHICLE
2y 5m to grant Granted Apr 07, 2026
Patent 12597086
Tile Region Protection in a Graphics Processing System
2y 5m to grant Granted Apr 07, 2026
Patent 12592012
METHOD, APPARATUS, ELECTRONIC DEVICE AND READABLE MEDIUM FOR COLLAGE MAKING
2y 5m to grant Granted Mar 31, 2026
Patent 12586270
GENERATING AND MODIFYING DIGITAL IMAGES USING A JOINT FEATURE STYLE LATENT SPACE OF A GENERATIVE NEURAL NETWORK
2y 5m to grant Granted Mar 24, 2026
Patent 12579709
IMAGE SPECIAL EFFECT PROCESSING METHOD AND APPARATUS
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
59%
Grant Probability
69%
With Interview (+10.3%)
3y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 832 resolved cases by this examiner. Grant probability derived from career allow rate.
