Prosecution Insights
Last updated: April 19, 2026
Application No. 18/517,225

Computer Vision Systems and Methods for Information Extraction from Floorplan Images

Status: Non-Final Office Action (§103)
Filed: Nov 22, 2023
Examiner: COBB, MICHAEL J
Art Unit: 2615
Tech Center: 2600 (Communications)
Assignee: Insurance Services Office Inc.
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 76%, above average (329 granted / 432 resolved; +14.2% vs Tech Center average)
Interview Lift: +37.9% among resolved cases with an interview
Typical Timeline: 2y 7m average prosecution; 19 applications currently pending
Career History: 451 total applications across all art units

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 4.4% (-35.6% vs TC avg)
§112: 34.7% (-5.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 432 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-20 are currently pending in the present application, with claims 1 and 11 being independent.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 20 March 2024 has been considered by the examiner.

Duplicate Claims

Claims 4 and 5 and claims 14 and 15 are respective duplicates of each other. Applicant is advised that should claim 4/14 be found allowable, claim 5/15 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates, or else are so close in content that they both cover the same thing despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Objections

Claims 1 and 11 are objected to because of the following informalities: claim 1 appears to need a word between "the processor" and each of the subsequent limitations, such as by reciting "the processor configured to: retrieve..."; and claims 1 and 11 should recite "performing named entity recognition". Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6, 8-13, 16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ahmed et al. ("Automatic Analysis and Sketch-Based Retrieval of Architectural Floor Plans", 2014) in view of Stockton ("Building Designs by Stockton", 2021) in further view of Wessel et al. ("The Room Connectivity Graph: Shape Retrieval in the Architectural Domain", 2008).

Regarding claim 1, Ahmed teaches a system for architectural floorplan image analytics (see, for instance, the abstract), comprising:

a database for storing a floorplan image (see, for instance, page 98, section 4.1 "Floorplan Analysis Evaluation"; page 99, section 5 "Conclusion and future work", paragraph 2; and fig. 1); and

a processor in communication with the database (the system is implemented on a computer, and while not explicitly stated by Ahmed, it would have been obvious to a person of ordinary skill in the art at the effective filing date of the invention that such a computer would comprise a processor in communication with a database and that the processor would perform the recited functions), the processor:

retrieving the floorplan image from the database ("Our system is evaluated using a data set containing original floor plan images. This data set was introduced by Macé et al. (2010) and contains the floor plan images"; see, for instance, page 98, section 4.1 "Floorplan Analysis Evaluation", paragraph 1, and fig. 1);

applying object detection to the floorplan image to identify one or more floors of the floorplan image, and applying segmentation to the one or more floors of the floorplan image to identify one or more entities of the one or more floors of the floorplan image ("Floor plans contain information that collectively help an architect to express the actual dynamics of the building... One of the key points of the proposed method is its fine segmentation of different types of information available in floor plans, e.g., walls, symbols, text, etc."; see, for instance, page 94, section 3.1.1 "Information Segmentation", paragraph 1. "To detect the actual bounds of rooms, the image with the closed gaps is inverted and connected component analysis is performed on it. All of the very small connected components are removed, whereas each of the remaining connected components is referred to as a room. The detected rooms can be found in Fig. 2b.3."; see, for instance, page 95, section 3.1.3 "Semantic Analysis", paragraph 3, and figs. 1, 2, and 5);

extracting text from the floorplan image ("First, text/graphics segmentation is performed using the methods presented by Ahmed et al. (2011b). This method is based on the method by Tombre et al. (2002) with a number of improvements specifically for floor plans. Text/graphics segmentation separates the text from the graphics in the floor plan image"; see, for instance, page 94, section 3.1.1 "Information Segmentation", paragraph 2, and fig. 1);

preforming named entity recognition on the extracted text ("After detection of rooms the next step is to define their functions like WC, Living room, etc. In order to find the function of each room, the text layer from the information segmentation as well as the connected component of the room is used. In particular, all text components which lie in the boundary of a room are taken into account. After extraction of the room text, horizontal and vertical smearing is performed on the extracted text to merge the neighboring characters, resulting in the bounds for words. Using the bounding boxes all the words are rotated to a horizontal direction and OCR is performed on them"; see, for instance, page 95, section 3.1.3 "Semantic Analysis", paragraph 3);

associating the one or more identified entities with one or more corresponding recognized entities ("The OCR result is then compared to rooms title dictionary and the closest title according to the Levenshtein distance is assigned to the room"; see, for instance, page 95, section 3.1.3 "Semantic Analysis", paragraph 3); and

generating one or more nodes corresponding to each recognized entity, and creating edges between nodes having a connective relationship ("The extracted semantics are represented as a graph G = (V,E). The vertices V have a type T_vertex reflecting a level, unit, zone, or room. The edges E also have different types T_edge indicating if the vertices are connected directly or are just adjacent, both of these relations are symmetric"; see, for instance, page 96, section 3.3 "Graph Structure", paragraph 1).
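The claim 1 mapping closes on Ahmed's graph structure: G = (V, E) with typed vertices (level, unit, zone, room) and typed edges (directly connected vs. merely adjacent, with both relations symmetric). For a concrete reference point, here is a minimal Python sketch of such a typed, symmetric graph; the class and field names are illustrative choices, not taken from Ahmed:

```python
from dataclasses import dataclass, field
from enum import Enum

class VertexType(Enum):
    # Vertex types from the quoted passage: level, unit, zone, room
    LEVEL = "level"
    UNIT = "unit"
    ZONE = "zone"
    ROOM = "room"

class EdgeType(Enum):
    # Edge types: directly connected vs. merely adjacent
    CONNECTED = "connected"
    ADJACENT = "adjacent"

@dataclass
class Vertex:
    vertex_id: int
    vtype: VertexType
    label: str = ""  # e.g., a recognized room title such as "WC"

@dataclass
class FloorplanGraph:
    vertices: dict[int, Vertex] = field(default_factory=dict)
    edges: set[tuple[int, int, EdgeType]] = field(default_factory=set)

    def add_vertex(self, v: Vertex) -> None:
        self.vertices[v.vertex_id] = v

    def add_edge(self, a: int, b: int, etype: EdgeType) -> None:
        # Both relations are symmetric, so store each pair in canonical order
        self.edges.add((min(a, b), max(a, b), etype))
```

Storing each edge with its endpoints in canonical order is one simple way to honor the symmetry of both relations described in the quote.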
While Ahmed teaches segmenting different types of information in floor plans for a given floor, such as walls, symbols, and text, Ahmed does not appear to teach applying object detection to the floorplan image to identify one or more floors of the floorplan image. In the same art of floor plans, Stockton teaches that a floorplan image can contain multiple floors (see, for instance, Stockton, page 1). It would have been obvious to one of ordinary skill in the art, having the teachings of Ahmed and Stockton in front of them before the effective filing date of the claimed invention, to incorporate multiple floors on a plan as taught by Stockton into Ahmed's architectural floor plan analysis system, as a plan having multiple floors, as described by Stockton, was well known at the effective filing date of the invention and would have yielded predictable results in combination with Ahmed. The modification of Ahmed with Stockton would have explicitly allowed the floor plan discussed in Ahmed to contain multiple floors. The motivation for combining Ahmed with Stockton would have been to improve the user experience, to enhance functionality by allowing for the use of known floor plan types, and to extend the floor plan of Ahmed to allow for multiple floors.

While Ahmed in view of Stockton teaches segmenting different types of information in floor plans for a given floor, such as walls, symbols, and text, the combination does not appear to explicitly teach applying object detection to the floorplan image to identify one or more floors of the floorplan image. In the same art of architecture, Wessel teaches that the extraction of room connectivity graphs consists of three steps: the detection of building stories, the determination of story rooms, and finally the determination of doors and windows connecting these rooms (see, for instance, section 3, "Room Connectivity Graph Extraction"). It would have been obvious to one of ordinary skill in the art, having the teachings of Ahmed, Stockton, and Wessel in front of them before the effective filing date of the claimed invention, to incorporate the detection of building stories as taught by Wessel into Ahmed's architectural floor plan analysis system, as detecting the number of floors, as described by Wessel, was well known at the effective filing date of the invention and would have yielded predictable results in combination with Ahmed and Stockton. The modification of Ahmed and Stockton with Wessel would have explicitly allowed applying object detection to the floorplan image to identify one or more floors of the floorplan image. The motivation for combining Ahmed and Stockton with Wessel would have been to improve the user experience, to enhance functionality, and to use references familiar to Ahmed (see section 2.4 of Ahmed).

Regarding claim 2, Ahmed in view of Stockton in further view of Wessel teaches the system of claim 1 and further teaches wherein the processor applies optical character recognition to the floorplan image to extract the text therefrom ("First, text/graphics segmentation is performed using the methods presented by Ahmed et al. (2011b)... Text/graphics segmentation separates the text from the graphics in the floor plan image"; see, for instance, Ahmed, page 94, section 3.1.1 "Information Segmentation", paragraph 2. "After extraction of the room text, horizontal and vertical smearing is performed on the extracted text to merge the neighboring characters, resulting in the bounds for words. Using the bounding boxes all the words are rotated to a horizontal direction and OCR is performed on them. The OCR result is then compared to rooms title dictionary and the closest title according to the Levenshtein distance is assigned to the room"; see, for instance, Ahmed, page 95, section 3.1.3 "Semantic Analysis", paragraph 3). The motivation to combine Ahmed, Stockton, and Wessel is the same as that set forth in claim 1.

Regarding claim 3, Ahmed in view of Stockton in further view of Wessel teaches the system of claim 1 and further teaches wherein the processor applies a bounding box to each of the one or more floors of the floorplan image (see the bounding-box passage quoted in the claim 2 rejection; Ahmed, page 95, section 3.1.3 "Semantic Analysis", paragraph 3. Thick lines can be used to construct the boundary of the building; see, for instance, Ahmed, page 94, section 3.1.1 "Information Segmentation", paragraph 4). The motivation to combine Ahmed, Stockton, and Wessel is the same as that set forth in claim 1.

Regarding claim 6, Ahmed in view of Stockton in further view of Wessel teaches the system of claim 1 and further teaches wherein the processor generates a multi-attributed graph for each of the one or more identified floors of the floorplan image (see the "Graph Structure" passage quoted in the claim 1 rejection; Ahmed, page 96, section 3.3, paragraph 1). The motivation to combine Ahmed, Stockton, and Wessel is the same as that set forth in claim 1.

Regarding claim 8, Ahmed in view of Stockton in further view of Wessel teaches the system of claim 1 and further teaches wherein the one or more nodes include associated node attributes, including one or more of an entity type, an entity size, and an entity floor (see the "Graph Structure" passage quoted in the claim 1 rejection; Ahmed, page 96, section 3.3, paragraph 1). The motivation to combine Ahmed, Stockton, and Wessel is the same as that set forth in claim 1.

Regarding claim 9, Ahmed in view of Stockton in further view of Wessel teaches the system of claim 1 and further teaches wherein the edges include associated edge attributes, including a connectivity type ("The edges E also have different types T_edge indicating if the vertices are connected directly or are just adjacent, both of these relations are symmetric"; see, for instance, Ahmed, page 96, section 3.3 "Graph Structure", paragraph 1). The motivation to combine Ahmed, Stockton, and Wessel is the same as that set forth in claim 1.
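Several of the mappings above (claims 1-3) lean on Ahmed's matching step: the OCR output is compared against a dictionary of room titles, and the closest title by Levenshtein distance is assigned to the room. A small sketch of that step, assuming a hypothetical title dictionary (only "WC" and "Living room" are actually named in the quoted passages):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance (insertions, deletions, substitutions)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute ca -> cb
        prev = curr
    return prev[-1]

# Hypothetical room-title dictionary; Ahmed names titles like "WC" and "Living room"
ROOM_TITLES = ["WC", "Living room", "Kitchen", "Bedroom", "Corridor"]

def assign_room_title(ocr_text: str) -> str:
    # Assign the dictionary title closest to the OCR result
    return min(ROOM_TITLES, key=lambda title: levenshtein(ocr_text.lower(), title.lower()))

print(assign_room_title("Livng rom"))  # -> "Living room"
```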
Regarding claim 10, Ahmed in view of Stockton in further view of Wessel teaches the system of claim 9 and further teaches wherein the edges indicate a directional connective relationship between adjoining rooms of a floor or a vertical connective relationship between vertically aligned rooms of different floors (see the T_edge passage quoted in the claim 9 rejection; Ahmed, page 96, section 3.3 "Graph Structure", paragraph 1). The motivation to combine Ahmed, Stockton, and Wessel is the same as that set forth in claim 1.

Regarding claim 11, claim 11 is the method claim corresponding to system claim 1 and is accordingly rejected using substantially similar rationale to that set forth with respect to claim 1.

Regarding claims 12, 13, 16, 18, 19, and 20, these claims recite substantially similar subject matter to claims 2, 3, 6, 8, 9, and 10, respectively, and are accordingly rejected using substantially similar rationale to that set forth with respect to those claims.

Claims 4, 5, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Ahmed et al. ("Automatic Analysis and Sketch-Based Retrieval of Architectural Floor Plans", 2014) in view of Stockton ("Building Designs by Stockton", 2021) in further view of Wessel et al. ("The Room Connectivity Graph: Shape Retrieval in the Architectural Domain", 2008), as applied to claims 1 and 11 above, in further view of Dodge et al. ("Parsing Floor Plan Images", 2017).

Regarding claim 4, Ahmed in view of Stockton in further view of Wessel teaches the system of claim 1 but does not appear to explicitly teach wherein the processor extracts entity size information from the extracted text and associates the extracted entity size information with a recognized entity. In the same art of floor plan analysis, Dodge teaches that "architectural floor plans are scaled drawings of apartments or building layouts. They contain structural and semantic information, e.g., room types and sizes, and the locations of doors, windows, and fixtures... In this paper we instead focus on readily available floor plan images from real estate websites" (see, for instance, page 358, section 1 "Introduction", paragraph 1). Having extracted the sizes of walls from OCR, Dodge is able to place furniture items into the model at the correct scale (see, for instance, page 358, section 1 "Introduction", paragraph 1). Dodge combines three methods to extract geometric and semantic information: wall segmentation, object detection, and optical character recognition (OCR) (see, for instance, page 358, section 3 "Parsing floor plans", paragraph 1). "Our input images may contain both English and Japanese text. We use the Google vision API for text detection and character recognition, which handles multiple languages" (see, for instance, page 359, "Optical Character Recognition", paragraph 1). Room sizes are read with OCR in the input image and propagated to unlabeled rooms (see, for instance, page 361, fig. 4). For Dodge's applications, the most important text is the room size (see, for instance, page 361, "Text detection performance", paragraph 1). "For each segmented room, we query the text information for the Japanese room measurement unit (Jo). We compute the relationship between the room size in physical units and pixels to compute the pixel density (in pixel/Jo). This pixel density can be used to compute an estimate of the area of rooms that are not labeled with physical units, as well as to compute wall length" (see, for instance, page 361, section 4.1 "Applications: 3D Model Creation and Furniture Fitting", paragraph 2). From the floor plan, Dodge extracts a parsed representation of wall locations, objects, and size information (see, for instance, page 361, section 5 "Conclusion", paragraph 2). It would have been obvious to one of ordinary skill in the art, having the teachings of Ahmed, Stockton, Wessel, and Dodge in front of them before the effective filing date of the claimed invention, to incorporate detecting entity size as taught by Dodge into Ahmed's architectural floor plan analysis system, as detecting room size, as described by Dodge, was well known at the effective filing date of the invention and would have yielded predictable results in combination with Ahmed, Stockton, and Wessel. The modification of Ahmed, Stockton, and Wessel with Dodge would have explicitly allowed the processor to extract entity size information from the extracted text and associate the extracted entity size information with a recognized entity. The motivation for combining Ahmed, Stockton, and Wessel with Dodge would have been to improve the user experience, to enhance functionality, and to allow the OCR described in Ahmed to explicitly extract the room size.

Regarding claim 5, claim 5 is a duplicate of claim 4 (see the duplicate claim advisory above) and is rejected over Ahmed in view of Stockton, Wessel, and Dodge using the same rationale and citations as set forth with respect to claim 4.

Regarding claims 14 and 15, these claims recite substantially similar subject matter to claims 4 and 5, respectively, and are accordingly rejected using substantially similar rationale to that set forth with respect to claims 4 and 5.
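The Dodge passages cited for claims 4, 5, 14, and 15 describe concrete arithmetic: room sizes in the Japanese Jo unit are read by OCR, a pixel density (pixels per Jo) is computed from rooms whose pixel area and labeled size are both known, and that density is used to estimate the areas of unlabeled rooms. A hedged sketch of that propagation, with hypothetical field names:

```python
def estimate_room_areas(rooms: list[dict]) -> list[dict]:
    """Propagate OCR-read room sizes to unlabeled rooms via pixel density.

    Each room dict carries 'pixel_area' (from segmentation) and, when OCR
    found a size label, 'label_area_jo' (room size in Jo units), echoing
    Dodge's description; the dict layout itself is an assumption.
    """
    labeled = [r for r in rooms if r.get("label_area_jo")]
    # Pixel density in pixels per Jo, averaged over the labeled rooms
    density = sum(r["pixel_area"] / r["label_area_jo"] for r in labeled) / len(labeled)
    for r in rooms:
        if not r.get("label_area_jo"):
            r["est_area_jo"] = r["pixel_area"] / density
    return rooms

rooms = [
    {"pixel_area": 12000, "label_area_jo": 6.0},  # room with an OCR-read size label
    {"pixel_area": 8000},                         # unlabeled room
]
print(estimate_room_areas(rooms)[1]["est_area_jo"])  # -> 4.0 (8000 px / 2000 px-per-Jo)
```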
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Ahmed et al. ("Automatic Analysis and Sketch-Based Retrieval of Architectural Floor Plans", 2014) in view of Stockton ("Building Designs by Stockton", 2021) in further view of Wessel et al. ("The Room Connectivity Graph: Shape Retrieval in the Architectural Domain", 2008), as applied to claims 6 and 16 above, in further view of Khosravan et al. (US Patent 11,830,135).

Regarding claim 7, Ahmed in view of Stockton in further view of Wessel teaches the system of claim 6 and further teaches wherein the processor merges the multi-attributed graphs for each of the one or more identified floors of the floorplan image to generate a combined multi-attributed graph corresponding to all identified floors of the floorplan image (see the "Graph Structure" passage quoted in the claim 1 rejection; Ahmed, page 96, section 3.3, paragraph 1. "Future work should also consider building units that connect different stories. By that, the room connectivity graphs of each story will be interlinked by edges representing staircases and elevator shafts"; see, for instance, Wessel, page 80, section 6 "Conclusion", paragraph 2). The motivation to combine Ahmed, Stockton, and Wessel is the same as that set forth in claim 1. While Ahmed in view of Stockton in further view of Wessel teaches the broadest reasonable interpretation of claim 7, Khosravan is brought into the rejection to explicitly teach the implementation of a multi-attributed graph corresponding to multiple floors. In the same art of floor plans, Khosravan teaches a multi-attributed graph corresponding to multiple floors of a floor plan (see, for instance, column 22, lines 36-67 and fig. 2E). It would have been obvious to one of ordinary skill in the art, having the teachings of Ahmed, Stockton, Wessel, and Khosravan in front of them before the effective filing date of the claimed invention, to incorporate the multi-floor graph as taught by Khosravan into Ahmed's architectural floor plan analysis system, as such a graph, as described by Khosravan, was well known at the effective filing date of the invention and would have yielded predictable results in combination with Ahmed, Stockton, and Wessel. The modification of Ahmed, Stockton, and Wessel with Khosravan would have explicitly allowed the processor to merge the multi-attributed graphs for each of the one or more identified floors of the floorplan image to generate a combined multi-attributed graph corresponding to all identified floors of the floorplan image. The motivation for combining Ahmed, Stockton, and Wessel with Khosravan would have been to improve the user experience, to enhance functionality, and to allow the future work described by Wessel to be implemented in the system.

Regarding claim 17, claim 17 recites substantially similar subject matter to claim 7 and is accordingly rejected using substantially similar rationale to that set forth with respect to claim 7.
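Claim 7 turns on merging the per-floor multi-attributed graphs into one combined graph, and the Wessel passage quoted above suggests interlinking the stories with edges for staircases and elevator shafts. A sketch of such a merge, assuming networkx as the graph library (an assumption; none of the cited references prescribe one):

```python
import networkx as nx  # assumed graph library for the multi-attributed graphs

def merge_floor_graphs(floor_graphs: list[nx.Graph],
                       vertical_links: list[tuple[str, str]]) -> nx.Graph:
    """Merge per-floor graphs into a combined multi-floor graph.

    vertical_links pairs nodes on different floors (e.g., staircases or
    elevator shafts), echoing Wessel's future-work remark.
    """
    combined = nx.Graph()
    for g in floor_graphs:
        combined = nx.compose(combined, g)  # union of nodes/edges with attributes
    for a, b in vertical_links:
        combined.add_edge(a, b, connectivity="vertical")
    return combined

# Hypothetical two-floor example with node attributes like those the claims recite
f1, f2 = nx.Graph(), nx.Graph()
f1.add_node("1:hall", entity_type="hall", floor=1)
f1.add_node("1:stairs", entity_type="staircase", floor=1)
f1.add_edge("1:hall", "1:stairs", connectivity="door")
f2.add_node("2:stairs", entity_type="staircase", floor=2)
f2.add_node("2:bedroom", entity_type="bedroom", floor=2)
f2.add_edge("2:stairs", "2:bedroom", connectivity="door")

merged = merge_floor_graphs([f1, f2], [("1:stairs", "2:stairs")])
print(merged.number_of_nodes(), merged.number_of_edges())  # -> 4 3
```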
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Pizarro et al. ("Automatic Floor Plan Analysis and Recognition") teaches implementing OCR to recognize room size and place furniture scaled to the scene (see, for instance, page 11, paragraph 2); using multi-modal information of a floor plan, such as room structure, type, symbols, text, and scale, to recognize and reconstruct its elements (see, for instance, page 13, paragraph 2); employing YOLOv4 to detect the ROIs alongside the text, numbers, and symbols containing semantic and contextual information like room types, dimensions, or areas (see, for instance, page 13, paragraph 2); and that floor plans might also include outer and inner walls, windows, furniture, dimension lines, grids, text, or icons, alongside the constraints and relationships between them (see, for instance, page 1, paragraph 1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL J COBB, whose telephone number is (571) 270-3875. The examiner can normally be reached Monday - Friday, 11am - 7pm ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL J COBB/
Primary Examiner, Art Unit 2615

Prosecution Timeline

Nov 22, 2023: Application Filed
Feb 05, 2024: Response after Non-Final Action
Jan 10, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597182: DATA INTERPOLATION PLATFORM FOR GENERATING PREDICTIVE AND INTERPOLATED PRICING DATA (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586321: AUTOMATED MEASUREMENT OF INTERIOR SPACES THROUGH GUIDED MODELING OF DIMENSIONS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579736: METHOD AND DEVICE FOR GENERATING THREE-DIMENSIONAL IMAGE BY USING PLURALITY OF CAMERAS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12561105: ONLINE ELECTRONIC WHITEBOARD CONTENT SYNCHRONIZATION AND SHARING SYSTEM (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561859: Method and System for Visualizing a Graph (granted Feb 24, 2026; 2y 5m to grant)

Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76% (99% with interview, a +37.9% lift)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 432 resolved cases by this examiner. Grant probability is derived from the career allow rate.
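As a sanity check, the headline grant probability is simply the career allow rate computed from the figures above; how the 99% with-interview number follows from the +37.9% lift is not spelled out on this page, so the capped-uplift line below is only a guess at the dashboard's formula:

```python
granted, resolved = 329, 432                   # career totals reported above
allow_rate = granted / resolved                # 0.7616... -> displayed as 76%
print(f"Career allow rate: {allow_rate:.0%}")  # Career allow rate: 76%

# Assumption: the with-interview figure adds the lift as percentage points,
# capped at 99%; this is a guess, not a documented formula.
with_interview = min(allow_rate + 0.379, 0.99)
print(f"With interview: {with_interview:.0%}")  # With interview: 99%
```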
