Prosecution Insights
Last updated: April 19, 2026
Application No. 19/234,078

AI ASSISTED REAL ESTATE LISTING

Non-Final OA: §101, §103
Filed: Jun 10, 2025
Examiner: CRANDALL, RICHARD W.
Art Unit: 3619
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: CoreLogic Solutions LLC
OA Round: 1 (Non-Final)

Grant Probability: 30% (At Risk)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 64%

Examiner Intelligence

Career Allow Rate: 30% (grants only 30% of cases; 90 granted / 301 resolved; -22.1% vs TC avg)
Interview Lift: +33.8% (strong lift for resolved cases with interview)
Typical Timeline: 3y 1m avg prosecution; 42 applications currently pending
Career History: 343 total applications across all art units

Statute-Specific Performance

§101: 34.6% (-5.4% vs TC avg)
§103: 37.1% (-2.9% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 15.4% (-24.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 301 resolved cases.
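The headline figures above can be reproduced from the raw counts. A minimal sketch follows; only the 90 granted / 301 resolved career totals are given in the data, so the interview lift is recomputed here from the rounded 64% "With Interview" figure (the dashboard's +33.8% presumably comes from unrounded underlying rates):

```python
# Reproduce the examiner stats from the raw counts shown above.
granted, resolved = 90, 301

career_allow_rate = granted / resolved                 # 0.2990...
print(f"Career allow rate: {career_allow_rate:.1%}")   # 29.9%, displayed as 30%

# The dashboard's rounded "With Interview" grant probability.
with_interview = 0.64
lift = with_interview - career_allow_rate
print(f"Interview lift: {lift:+.1%}")                  # +34.1% from rounded inputs
```

The small gap between the recomputed +34.1% and the displayed +33.8% is a rounding artifact of using the 64% figure rather than the tool's unrounded rates.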

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office action is in response to correspondence received July 3, 2025. Claims 1-20 are pending and have been examined.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite (claims 1, 11, and 16 being similar in scope):

A method comprising: receiving property information; generating a prompt for a machine learning model comprising at least a portion of the property information; applying the prompt as input to a machine learning model to cause the machine learning model to generate property listing information; transmitting the property listing information; based on transmitting the property listing information, receiving a response associated with the property listing information; based on the response, modifying the prompt to generate a modified prompt; applying the modified prompt as input to the machine learning model to cause the machine learning model to generate updated property listing information; and automatically updating a property listing based on the updated property listing information.

The applicant is reciting a certain method of organizing human activity: commercial interaction. Practicing real estate, of which listing (equivalent to saying "for sale" or "for rent") is a part, has been found to be an abstract idea in the following decisions: BSG Tech. LLC v. Buyseasons, Inc., 899 F.3d 1281, 1286, 127 USPQ2d 1688, 1691 (Fed. Cir. 2018); see MPEP 2106.04(a)(2)(II)(C); Fort Properties, Inc. v. American Master Lease, LLC, 671 F.3d 1317, 101 USPQ2d 1785 (Fed. Cir. 2012); see MPEP 2106.04(a)(2)(II)(B). The above steps describe a process for listing real estate.
"Automatically," under a broadest reasonable interpretation, signifies a step done as a rule. The prompts are understood to be plain English instructions. As the steps above describe real estate property listing, which is a commercial interaction, and as this subject matter has been found patent ineligible in the above decisions, claims 1, 11, and 16 recite a patent ineligible abstract idea.

This judicial exception is not integrated into a practical application. Claims 1, 11, and 16 recite a combination of additional elements that are generic computing elements and applied machine learning. See MPEP 2106.05(f)(2). The combination is not a practical application because these are "apply it" limitations: the combination is a computer or computers, which are understood to run machine learning. The machine learning here is explicitly applied, but it is also recited at a high level, which indicates that the ML is being prompted; prompting is the common term for writing, in plain English, a request to an off-the-shelf (or on-the-app-store) machine learning tool like ChatGPT or Gemini. Therefore the combination of additional elements amounts to a computer, like a smartphone or laptop, running an ML app or webpage.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, for the same reasons that there is not a practical application, there is not significantly more. The reasoning above is carried over. The claims recite a combination of additional elements that amount to instructions to be applied to the abstract idea. Therefore, the claims are not significantly more than the abstract idea.

Per the dependent claims:

Per claims 2, 12, and 17, which are similar in scope, the abstract idea is further defined by receiving image information of a property and generating information.
Causing display and a graphical user interface are "apply it" steps, as a smartphone/laptop/desktop has a screen/monitor that in its ordinary capacity displays images. See MPEP 2106.05(f)(2).

Per claims 3 and 15, which are similar in scope, using a LIDAR source is using a source of information, a further abstract idea element (see Electric Power Group: collecting, analyzing, and displaying the results of the analysis).

Per claim 4, generating a floor plan is a further abstract idea element, as this can be done with pen and paper, and it is also part of the commercial interaction of real estate listings. One can simply look at an image and draw the floor plan.

Per claims 5 and 13, which are similar in scope, and similar to claim 4, one can receive an image of a room (3D) and map it to 2D, as part of generating a listing or with pen and paper (a mental process), then receive a second image, repeat, and combine. As there is no technical detail as to how images are combined (cf. McRO), only that they are combined, any combination of images is at most an "apply it" step. See MPEP 2106.05(f)(1) (desired outcome or result claimed).

Per claims 6 and 14, which are similar in scope, images are part of the abstract idea, and the devices that generate them are operating in their ordinary capacity and are "apply it" elements, and therefore not a practical application of the abstract idea.

Per claim 7, identifying fixtures in a room further defines the abstract idea of claim 1.

Per claims 8 and 18, which are similar in scope, "apply it" limitations about machine learning are claimed: the machine learning, which is recited as "trained to identify fixtures," is then used to do the same. This is the machine learning model being used in its ordinary capacity. See MPEP 2106.05(f)(2).
Per claims 9 and 19, which are similar in scope, and similarly to claims 8 and 18, the machine learning is trained to identify fixtures based on multimodal input information, or a second model is trained on audio information, and each is used to perform this step, which recites that these machine learning models are operating in their ordinary capacity. See MPEP 2106.05(f)(2).

Per claims 10 and 20, which are similar in scope, the real estate abstract idea is further defined with information being in different "modalities."

Therefore, claims 1-20 are rejected under 35 USC 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 10-12, 16, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yakoel et al., WO 2024028862 A1 ("Yakoel"), in view of Jason the Flynn, "A.I. Prompt Engineering Made Simple (for Real Estate Agents!)," YouTube, published Oct. 11, 2023, available at <https://www.youtube.com/watch?v=uClARhNVSlU> ("Jason").

Per claims 1, 11, and 16, which are similar in scope, Yakoel teaches a computer-implemented method comprising receiving property information, in page 16:

"In step 120a, basic data regarding the available property may be retrieved. According to some embodiments, basic data such as but not limited to the address of the available property, the size of the available property, number of rooms in the available property, date of availability, owner of the available property, amenities associated with the available property etc. Each possibility and combination of possibilities is a separate embodiment. According to some embodiments, the basic data retrieved from structures data sources such as from the obtained property listing and/or from a plurality of additional property listings directed to the same available property listing."

Yakoel then teaches, based on transmitting the property listing information, receiving a response associated with the property listing information, in page 19:

"According to some embodiments, when more than one property listing is received, an algorithm may be applied which algorithm may be configured to correlate and/or compare the data in the listing and, based thereon, clean up/identify any duplication and/or contradictions between the listings. According to some embodiments, when contradictions are identified, the algorithm may be configured to issue an alert, request a user input. Alternatively, the algorithm may automatically select one option over another, based, for example, on a predetermined reliability of one source over another, and/or based on the number of listings indicating a same option.
According to some embodiments, when more than one property listing is obtained, the listings may be provided in different formats, in which case the algorithm may be further configured to format all listings and/or listing data into a same standard format."

Yakoel then teaches automatically updating a property listing based on the updated property listing information, in page 22:

"Reference is now made to FIG. 2b, which schematically shows an outline of the herein disclosed AI-based platform for generating an enriched property listing, according to some embodiments. The outline disclosed in FIG. 2b is similar to the outline of FIG. 2a, however, the big-data analytics and/or machine learning (ML) algorithms are applied on the extracted features to provide an adjusted marginal price of the property, which may then be presented in the enriched property listing. According to some embodiments, the adjusted marginal price may be computed by applying big-data analysis and/or machine learning models on a subset of the features, such as features directly and/or specifically relating to the property, referred to herein as property specific features (as opposed to, for example, neighborhood features)."

Yakoel does not teach: generating a prompt for a machine learning model comprising at least a portion of the property information; applying the prompt as input to a machine learning model to cause the machine learning model to generate property listing information; transmitting the property listing information; based on the response, modifying the prompt to generate a modified prompt; or applying the modified prompt as input to the machine learning model to cause the machine learning model to generate updated property listing information.

Jason teaches instructions for using ChatGPT to generate and modify a property listing. See minutes 0:00 – 10:00.
Jason then teaches generating a prompt for a machine learning model comprising at least a portion of the property information, in page 1 of the PDF and minutes 5:00 – 8:30 of the YouTube video. The context and task, as explained and shown there, amount to generating a prompt for a machine learning model.

Jason then teaches applying the prompt as input to a machine learning model to cause the machine learning model to generate property listing information at 8:30 (page 1), where he submits the prompt to ChatGPT. ChatGPT is a machine learning model, so this applies the prompt as input to a machine learning model, and it generates property listing information.

Jason then teaches transmitting the property listing information at 8:30, where ChatGPT responds with the property listing information, which is transmitted from ChatGPT to the user.

Jason then teaches, based on the response, modifying the prompt to generate a modified prompt, in page 2 and minutes 8:35 – 9:10, where the response is "make it a little shorter." This modifies the prompt, as it is a new prompt, generating a modified prompt that says, "make it a little shorter."

Jason then teaches applying the modified prompt as input to the machine learning model to cause the machine learning model to generate updated property listing information: in page 2, this is entered into ChatGPT, which causes ChatGPT to generate updated property listing information, namely, a shorter listing.

It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the enhanced updating of a property listing of Yakoel with Jason's teaching of using prompts to update a property listing because, at 4:00-5:00, Jason teaches that simply by using plain English one can get what one is looking for and develop a property listing to one's liking.
As this is a taught benefit of using ChatGPT and similar generative AI, one would be motivated to combine Yakoel with Jason so that one could use plain English to get the desired results and update the listings accordingly.

Per claims 2, 12, and 17, which are similar in scope, Yakoel and Jason teach the limitations of claims 1, 11, and 16, above. Yakoel further teaches generating information to cause display of a graphical user interface, and receiving, via the graphical user interface, an image comprising image information for a portion of a property, wherein the property information comprises the image information, in Fig. 5, where a picture is included in the property listing.

Per claims 10 and 20, which are similar in scope, Yakoel and Jason teach the limitations of claims 1 and 16, above. Yakoel further teaches that the response comprises second property information associated with the property information, wherein the property information is in a first modality, and wherein the second property information is in a second modality different from the first modality, in page 19:

"According to some embodiments, when more than one property listing is received, an algorithm may be applied which algorithm may be configured to correlate and/or compare the data in the listing and, based thereon, clean up/identify any duplication and/or contradictions between the listings. According to some embodiments, when contradictions are identified, the algorithm may be configured to issue an alert, request a user input. Alternatively, the algorithm may automatically select one option over another, based, for example, on a predetermined reliability of one source over another, and/or based on the number of listings indicating a same option.
According to some embodiments, when more than one property listing is obtained, the listings may be provided in different formats, in which case the algorithm may be further configured to format all listings and/or listing data into a same standard format."

Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Yakoel et al., WO 2024028862 A1 ("Yakoel"), in view of Jason the Flynn, "A.I. Prompt Engineering Made Simple (for Real Estate Agents!)," YouTube, published Oct. 11, 2023, available at <https://www.youtube.com/watch?v=uClARhNVSlU> ("Jason"), further in view of Rawat et al., US PGPUB 20240412306 A1 ("Rawat").

Per claims 3 and 15, which are similar in scope, Yakoel and Jason teach the limitations of claims 2 and 12, above. Yakoel does not teach wherein the image information comprises LIDAR information generated by a LIDAR of a user device.

Rawat teaches updating a database for real estate property. See abstract. Rawat teaches wherein the image information comprises LIDAR information generated by a LIDAR of a user device in par 029:

"FIG. 7 is a flow diagram illustrating an embodiment of a process for updating a database. In some embodiments, the process of FIG. 7 is executed by a real estate server (e.g., real estate server 106 of FIG. 1). In the example shown, in 700, image data is received. In various embodiments, image data is received from a real estate customer, a real estate agent, a 3rd party listing provider, or any other appropriate real estate system user. In some embodiments, image data is associated with a real estate property. In some embodiments, image data comprises three-dimensional image data (e.g., interior three-dimensional data or exterior three-dimensional data).
In some embodiments, three-dimensional image data comprises light detection and ranging (e.g., LIDAR) data."

That the LIDAR information is generated by a LIDAR of a user device is taught in par 039:

"Visual Search or Similarity search (e.g., a search where the query is an image)—for example, a search is initiated using an image (e.g., an image taken by a user device); the system determines tags for the image or a visual representation(s) for similarity, the tags and/or the visual representation(s) are used to initiate a search, results returned based on matches to tags or attributes associated with properties in the database, images and/or their associated properties are provided to the initiator of the search using the image. In some embodiments, the user of the system can query the database to search for a property or an image using a photo (e.g., a photo captured by a device). For example, scenarios using a photo query include: scenario 1) consumer is visiting an open house, likes the kitchen, consumer uses an app on their mobile device to capture an image of the kitchen, which is then sent to a server for processing, the photo is analyzed and a result set of homes is returned with similar kitchens to the consumer; and scenario 2) same as scenario #1, however focused on external features of the house—for example a photo of the front of the home and we return result set of homes with similar architectural style."

Claims 4-9, 13, 14, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Yakoel et al., WO 2024028862 A1 ("Yakoel"), in view of Jason the Flynn, "A.I. Prompt Engineering Made Simple (for Real Estate Agents!)," YouTube, published Oct. 11, 2023, available at <https://www.youtube.com/watch?v=uClARhNVSlU> ("Jason"), further in view of Lambert et al., US PGPUB 20230138762 A1 ("Lambert").

Per claim 4, Yakoel and Jason teach the limitations of claim 2, above.
Yakoel does not teach generating a floor plan based on image information, wherein the floor plan is generated, based in part, on the image, and wherein the property information further comprises the floor plan.

Lambert teaches automated generation of a floor plan from panorama images. See abstract. Lambert teaches generating a floor plan based on image information, wherein the floor plan is generated, based in part, on the image, and wherein the property information further comprises the floor plan, in par 034:

"FIGS. 2A-2S illustrate examples of automated operations for analyzing visual data of images captured in multiple rooms of a building to generate a floor plan for the building based at least in part on using visual data of the images to align pairs of images that have little or no overlapping visual coverage, and for generating and presenting information about the floor plan for the building, such as based on target images captured within the building 198 of FIG. 1B."

It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the listing enhancement of Yakoel with the generating a floor plan based on images teaching of Lambert because Lambert teaches in par 003:

"it may be desirable to view information about the interior of a house, office, or other building without having to physically travel to and enter the building, including to determine actual as-built information about the building rather than design information from before the building is constructed.
However, it can be difficult to effectively capture, represent and use such building interior information, including to display visual information captured within building interiors to users at remote locations (e.g., to enable a user to fully understand the layout and other details of the interior, including to control the display in a user-selected manner)."

Lambert's teachings overcome this technical hurdle of having to be somewhere to generate a floor plan by allowing images to create the floor plan. One would be motivated to combine Lambert with Yakoel so that one could have more property information relevant to a listing. For these reasons one would be motivated to modify Yakoel with Lambert.

Per claims 5 and 13, which are similar in scope, Yakoel and Jason teach the limitations of claims 2 and 12, above. Yakoel does not teach: receiving a first image comprising image information for a first portion of a room of a structure; generating a first portion of a model of the room based on the first image; based on the generated first portion, displaying, via the user interface, an instruction indicating a user is to capture a second image comprising image information for a second portion of the room, wherein at least a portion of the second image is different from the first image; receiving the second image; generating a second portion of the model of the room based in part on the second image and the first portion of the model; combining the first portion and the second portion of the model of the room to generate a combined model; and displaying, via the user interface, the combined model.

Lambert teaches receiving a first image comprising image information for a first portion of a room of a structure in par 086: "The illustrated embodiment of the routine begins at block 505, where information or instructions are received. The routine continues to block 510 to determine whether to generate a floor plan for an indicated building, and if not continues to block 590.
Otherwise, the routine continues to block 515 to obtain the target images for the building and optionally associated dimension/scale information (e.g., to retrieve stored target images that were previously acquired; to use target images supplied in block 505; to concurrently acquire such information, with FIG. 4 providing one example embodiment of an ICA system routine for performing such image acquisition, including optionally waiting for one or more users or devices to move throughout one or more rooms of the building and acquire panoramas or other images at acquisition locations in building rooms and optionally other building areas, and optionally along with metadata information regarding the acquisition and/or interconnection information related to movement between acquisition locations, as discussed in greater detail elsewhere herein).” Lambert then teaches generating a first portion of a model of the room based on the first image in par 087: “After block 515, the routine continues to block 520, where it uses a trained layout determination machine learning model (e.g., as part of a neural network) to, for each target image, estimate structural layout information for one or more rooms visible in the target image, including to identify wall elements such as windows, doorways and non-doorway openings, and to determine an image pose within the structural layout, and optionally learn floor and/or ceiling features for use in subsequent rendered views. 
After block 520, the routine continues to block 525, where it renders, for each target image, one or more views of floor and/or ceilings visible in the target image, such as to use RGB visual data and determined estimated monocular depth data to texture-map the views, and/or to overlay locations of wall elements and/or other semantic information, etc.” See also par 075: “The MIGM system 340 may further, during its operation, store and/or retrieve various types of data on storage 320 (e.g., in one or more databases or other data structures), such as information 321 about target panorama images (e.g., acquired by one or more camera devices 375) and associated rendered rectilinear view images 323, information 325 about determined structural room layout for the target panorama images (e.g., locations of walls and other structural elements, locations of structural wall elements, image acquisition pose information, etc.), various types of floor plan information and other building mapping information 326 (e.g., generated and saved 2D floor plans with 2D room shapes and positions of wall elements and other elements on those floor plans and optionally additional information such as building and room dimensions for use with associated floor plans, existing images with specified positions, annotation information, etc.; generated and saved 2.5D and/or 3D model floor plans that are similar to the 2D floor plans but further include height information and 3D room shapes; etc.),” Lambert then teaches based on the generated first portion, displaying, via the user interface, an instruction indicating a user is to capture a second image comprising image information for a second portion of the room, wherein at least a portion of the second image is different from the first image in par 081: “In addition, the routine in some embodiments may further optionally determine and provide one or more guidance cues to the user regarding the motion of the mobile device, quality of the sensor 
data and/or visual information being captured during movement to the next acquisition location (e.g., by monitoring the movement of the mobile device), including information about associated lighting/environmental conditions, advisability of capturing a next acquisition location, and any other suitable aspects of capturing the linking information." See also par 082: "[F]or example, the ICA system may provide one or more notifications to the user regarding the information acquired during capture of the multiple acquisition locations and optionally corresponding linking information, such as if it determines that one or more segments of the recorded information are of insufficient or undesirable quality, or do not appear to provide complete coverage of the building. In addition, in at least some embodiments, if minimum criteria for images (e.g., a minimum quantity and/or type of images) have not been satisfied by the captured images (e.g., at least two panorama images in each room, panorama images within a maximum specified distance of each other, etc.), the ICA system may prompt or direct the acquisition of additional panorama images to satisfy such criteria."

Lambert then teaches receiving the second image in par 081: "In block 424, the routine then determines that the mobile computing device (and one or more associated camera devices) arrived at the next acquisition location (e.g., based on an indication from the user, based on the forward movement of the user stopping for at least a predefined amount of time, etc.), for use as the new current acquisition location, and returns to block 415 in order to perform the image acquisition activities for the new current acquisition location." See also par 082: "[F]or example, the ICA system may provide one or more notifications to the user regarding the information acquired during capture of the multiple acquisition locations and optionally corresponding linking information, such as if it determines that one or more segments of
the recorded information are of insufficient or undesirable quality, or do not appear to provide complete coverage of the building. In addition, in at least some embodiments, if minimum criteria for images (e.g., a minimum quantity and/or type of images) have not been satisfied by the captured images (e.g., at least two panorama images in each room, panorama images within a maximum specified distance of each other, etc.), the ICA system may prompt or direct the acquisition of additional panorama images to satisfy such criteria.” Lambert then teaches generating a second portion of the model of the room based in part on the second image and the first portion of the model in par 087: “After block 520, the routine continues to block 525, where it renders, for each target image, one or more views of floor and/or ceilings visible in the target image, such as to use RGB visual data and determined estimated monocular depth data to texture-map the views, and/or to overlay locations of wall elements and/or other semantic information, etc.” Lambert then teaches combining the first portion and the second portion of the model of the room to generate a combined model in par 088: “After block 525, the routine continues to block 530 to select a next pair of the target images (beginning with a first), and then proceeds to block 535 to attempt to determine one or more potential alignments between the visual data of the target images (e.g., based at least in part on matching structural layout information for the target images, by using determined target image pose information, etc.). After block 535, the routine in block 540 proceeds to use a trained machine learning model (e.g., as part of convolutional neural network) to attempt to validate a potential alignment between the two images of the pair using their rendered views, along with generating a corresponding alignment score or other alignment indication. 
After block 540, the routine continues to block 555, where it determines if there are more pairs of images to compare, and if so returns to block 530 to select a next pair of images. Otherwise, the routine continues to block 560 where it reviews the alignment scores for the potential alignments of the image pairs, and discards potential alignments according to one or more defined criteria (e.g., alignment scores below a determined threshold).” Lambert then teaches and displaying, via the user interface, the combined model in par 089: “such as to provide the generated 2D floor plan and/or 3D computer model floor plan for display on one or more client devices and/or to one or more other devices for use in automating navigation of those devices and/or associated vehicles or other entities, to provide and use information about determined room layouts/shapes and/or a linked set of panorama images and/or about additional information determined about contents of rooms and/or passages between rooms, etc.” It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the listing enhancement of Yakoel with the generating a floor plan based on images teaching of Lambert because Lambert teaches in par 003: “it may be desirable to view information about the interior of a house, office, or other building without having to physically travel to and enter the building, including to determine actual as-built information about the building rather than design information from before the building is constructed. 
However, it can be difficult to effectively capture, represent and use such building interior information, including to display visual information captured within building interiors to users at remote locations (e.g., to enable a user to fully understand the layout and other details of the interior, including to control the display in a user-selected manner)."

Lambert's teachings overcome this technical hurdle of having to be somewhere to generate a floor plan by allowing images to create the floor plan. One would be motivated to combine Lambert with Yakoel so that one could have more property information relevant to a listing. For these reasons one would be motivated to modify Yakoel with Lambert.

Per claims 6 and 14, which are similar in scope, Yakoel, Jason, and Lambert teach the limitations of claims 5 and 13, above. Yakoel does not teach wherein the first image comprises first image information generated by a camera, and wherein the second image comprises second image information generated by a LIDAR.
Lambert teaches wherein the first image comprises first image information generated by a camera, and wherein the second image comprises second image information generated by a LIDAR in par 027: “The mobile computing device 185 of the user may include various hardware components, such as a camera or other imaging system 135, one or more sensors 148 (e.g., a gyroscope 148a, an accelerometer 148b, a compass 148c, etc., such as part of one or more IMUs, or inertial measurement units, of the mobile device; an altimeter; light detector; etc.), one or more hardware processors 132, memory 152, a display 142, optionally a GPS receiver, and optionally other components that are not shown (e.g., additional non-volatile storage; transmission capabilities to interact with other devices over the network(s) 170 and/or via direct device-to-device communication, such as with an associated camera device 184 or a remote server computing system 180; one or more external lights; a microphone, etc.)—however, in some embodiments, the mobile device may not have access to or use hardware equipment to measure the depth of objects in the building relative to a location of the mobile device (such that relationships between different panorama images and their acquisition locations may be determined in part or in whole based on analysis of the visual data of the images, and optionally in some such embodiments by further using information from other of the listed hardware components (e.g., IMU sensors 148), but without using any data from any such depth sensors), while in other embodiments the mobile device may have one or more distance-measuring sensors or other depth-sensing sensors/devices 136 (e.g., using lidar or other laser rangefinding techniques, structured light, synthetic aperture radar or other types of radar, etc.) used to measure depth to surrounding walls and other surrounding objects that may be supplied to a trained machine learning model (e.g., as part of a convolutional neural network or other type of neural network, as part of a vision image transformer network, etc.)”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the listing enhancement of Yakoel with Lambert’s teaching of generating a floor plan based on images, because Lambert teaches in par 003: “it may be desirable to view information about the interior of a house, office, or other building without having to physically travel to and enter the building, including to determine actual as-built information about the building rather than design information from before the building is constructed. However, it can be difficult to effectively capture, represent and use such building interior information, including to display visual information captured within building interiors to users at remote locations (e.g., to enable a user to fully understand the layout and other details of the interior, including to control the display in a user-selected manner).”

Lambert’s teachings overcome this technical hurdle of having to be physically present to generate a floor plan by allowing images to create the floor plan. One would be motivated to combine Lambert with Yakoel to obtain more property information relevant to a listing. For these reasons, one would be motivated to modify Yakoel with Lambert.

Per claim 7, Yakoel, Jones, and Lambert teach the limitations of claim 6, above. Yakoel does not teach identifying, based on at least one of the first image or the second image, at least one of an appliance or fixture in the room.
Lambert teaches identifying, based on at least one of the first image or the second image, at least one of an appliance or fixture in the room in par 051: “For example, floor plan 230k includes additional information of various types, such as may be automatically identified from analysis operations of visual data from images and/or from depth data, including one or more of the following types of information: room labels (e.g., “living room” for the living room), room dimensions, visual indications of fixtures or appliances or other built-in features, visual indications of positions of additional types of associated and linked information (e.g., of panorama images and/or perspective images acquired at specified acquisition positions, which an end user may select for further display.”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the listing enhancement of Yakoel with Lambert’s teaching of generating a floor plan based on images, because Lambert teaches in par 003: “it may be desirable to view information about the interior of a house, office, or other building without having to physically travel to and enter the building, including to determine actual as-built information about the building rather than design information from before the building is constructed. However, it can be difficult to effectively capture, represent and use such building interior information, including to display visual information captured within building interiors to users at remote locations (e.g., to enable a user to fully understand the layout and other details of the interior, including to control the display in a user-selected manner).”

Lambert’s teachings overcome this technical hurdle of having to be physically present to generate a floor plan by allowing images to create the floor plan. One would be motivated to combine Lambert with Yakoel to obtain more property information relevant to a listing. For these reasons, one would be motivated to modify Yakoel with Lambert.

Per claims 8 and 18, which are similar in scope, Yakoel and Jones teach the limitations of claims 2 and 17, above. Yakoel further teaches based on the [image], generating second property listing information in page 20: “According to some embodiments, the unstructured feed may be directly associated with the property, such as but not limited to, images, free texts descriptions of the property, social network posts (e.g. Facebook posts) regarding the property and the like or combinations thereof. Each possibility is a separate embodiment. According to some embodiments, the unstructured feed may also include feeds that are not directly associated with the property, such as, but not limited to newspaper articles related to the neighborhood of the property, social network feeds related to the neighborhood (e.g. a neighborhood profile), neighborhood images (e.g. from google street view), text/audio messages from other potential buyers/renters and/or from real estate agents, phone calls with potential buyers/renters and/or real-estate agents and the like or combinations thereof. Each possibility is a separate embodiment.” Then see page 23: “According to some embodiments, an enriched property listing (as generated for example as outlined with regards to FIGS. 2a and 2b, may be constantly or periodically (e.g. once a day or once a week) updated, for example, by applying a web crawler, (also referred to as a spider, spiderbot or crawler) to systematically browse the internet for new feeds (structured and unstructured), feedback from clients/agents (in the form of text/audio messages, phone calls, emails or the like), and data from public and/or private data providers.
According to some embodiments, the newly gathered data may be analyzed in order to identify whether or not it includes new information, for example by applying NLP algorithms capable of identifying key content in the data. The above-described AI models may then be reapplied on the new data in order to update the enriched property listing, in order to update the computed predictions related to the property and/or to update the marginal price of the property accordingly.”

Yakoel does not teach further comprising: applying the image information as input to a second machine learning model trained to identify fixtures represented in image information to cause the second machine learning model to identify a fixture represented in the image; identified fixture.

Lambert teaches further comprising: applying the image information as input to a second machine learning model trained to identify fixtures represented in image information to cause the second machine learning model to identify a fixture represented in the image; identified fixture in par 054: “A machine learning model (e.g., that is part of a convolutional neural network or other type of neural network, as part of a vision image transformer network, etc.) is trained in this example embodiment to identify aspects such as shared floor texture around room openings, common objects (e.g., refrigerators), or known priors on room adjacency (e.g., the fact that bathrooms and bedrooms are often adjacent), as well as other signals present in image data, such as light source reflections, paneling direction of wood flooring, shared ceiling features, etc.”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the listing enhancement of Yakoel with Lambert’s teaching of generating a floor plan based on images, because Lambert teaches in par 003: “it may be desirable to view information about the interior of a house, office, or other building without having to physically travel to and enter the building, including to determine actual as-built information about the building rather than design information from before the building is constructed. However, it can be difficult to effectively capture, represent and use such building interior information, including to display visual information captured within building interiors to users at remote locations (e.g., to enable a user to fully understand the layout and other details of the interior, including to control the display in a user-selected manner).”

Lambert’s teachings overcome this technical hurdle of having to be physically present to generate a floor plan by allowing images to create the floor plan. One would be motivated to combine Lambert with Yakoel to obtain more property information relevant to a listing. For these reasons, one would be motivated to modify Yakoel with Lambert.

Per claims 9 and 19, Yakoel, Jones, and Lambert teach the limitations of claims 8 and 18, above.
Yakoel does not teach receiving a user utterance as audio information, wherein the second machine learning model is a multimodal model, wherein the second machine learning model is further trained to identify fixtures based on multimodal input information, wherein applying the image information as input to the second machine learning model further comprises applying the audio information as input to the second machine learning model, and wherein the fixture is further identified based in part on the audio information.

Lambert teaches receiving a user utterance as audio information in par 051: “of audio annotations and/or sound recordings that an end user may select for further presentation; etc.),”

Lambert then teaches wherein the second machine learning model is a multimodal model, wherein the second machine learning model is further trained to identify fixtures based on multimodal input information, wherein applying the image information as input to the second machine learning model further comprises applying the audio information as input to the second machine learning model in par 075: “The MIGM system 340 may further, during its operation, store and/or retrieve various types of data on storage 320 (e.g., in one or more databases or other data structures), such as information 321 about target panorama images (e.g., acquired by one or more camera devices 375) and associated rendered rectilinear view images 323, information 325 about determined structural room layout for the target panorama images (e.g., locations of walls and other structural elements, locations of structural wall elements, image acquisition pose information, etc.), various types of floor plan information and other building mapping information 326 (e.g., generated and saved 2D floor plans with 2D room shapes and positions of wall elements and other elements on those floor plans and optionally additional information such as building and room dimensions for use with associated floor plans, existing images with specified positions, annotation information, etc.; generated and saved 2.5D and/or 3D model floor plans that are similar to the 2D floor plans but further include height information and 3D room shapes; etc.), optionally user information 322 about users of client computing devices 390 and/or operator users of mobile devices 360 who interact with the MIGM system, optionally training data for use with one or more machine learning models (e.g., as part of one or more convolutional neural networks or other neural networks, one or more vision image transformer network, etc.) and/or the resulting trained machine learning models (not shown), and optionally various other types of additional information 329.” In par 054: “A machine learning model (e.g., that is part of a convolutional neural network or other type of neural network, as part of a vision image transformer network, etc.) is trained in this example embodiment to identify aspects such as shared floor texture around room openings, common objects (e.g., refrigerators), or known priors on room adjacency (e.g., the fact that bathrooms and bedrooms are often adjacent), as well as other signals present in image data, such as light source reflections, paneling direction of wood flooring, shared ceiling features, etc. Examples of pairwise signals available from unaligned panorama input are enumerated in the table below, which are generally complementary (e.g., many weaknesses of monocular-depth signal can be alleviated by semantic segmentation information)—in many situations, no single signal is sufficient.”

Lambert then teaches and wherein the fixture is further identified based in part on the audio information in par 051: “for example, floor plan 230k includes additional information of various types, such as may be automatically identified from analysis operations of visual data from images and/or from depth data, including one or more of the following types of information: room labels (e.g., “living room” for the living room), room dimensions, visual indications of fixtures or appliances or other built-in features, visual indications of positions of additional types of associated and linked information (e.g., of panorama images and/or perspective images acquired at specified acquisition positions, which an end user may select for further display; of audio annotations and/or sound recordings that an end user may select for further presentation; etc.), visual indications of doorways and windows, etc.—”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the listing enhancement of Yakoel with Lambert’s teaching of generating a floor plan based on images, because Lambert teaches in par 003: “it may be desirable to view information about the interior of a house, office, or other building without having to physically travel to and enter the building, including to determine actual as-built information about the building rather than design information from before the building is constructed. However, it can be difficult to effectively capture, represent and use such building interior information, including to display visual information captured within building interiors to users at remote locations (e.g., to enable a user to fully understand the layout and other details of the interior, including to control the display in a user-selected manner).”

Lambert’s teachings overcome this technical hurdle of having to be physically present to generate a floor plan by allowing images to create the floor plan. One would be motivated to combine Lambert with Yakoel to obtain more property information relevant to a listing. For these reasons, one would be motivated to modify Yakoel with Lambert.

Therefore, claims 1-20 are rejected under 35 USC 103.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD W. CRANDALL whose telephone number is (313) 446-6562. The examiner can normally be reached M - F, 8:00 AM - 5:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anita Coupe, can be reached at (571) 270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RICHARD W. CRANDALL/
Primary Examiner, Art Unit 3619

Prosecution Timeline

Jun 10, 2025
Application Filed
Mar 21, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602666
INFORMATION HANDLING SYSTEM MICRO MANUFACTURING CENTER FOR REUSE AND RECYCLING FACTORING INVENTORY
2y 5m to grant Granted Apr 14, 2026
Patent 12591589
DECENTRALIZED WILL MANAGEMENT APPARATUS, SYSTEMS AND RELATED METHODS OF USE
2y 5m to grant Granted Mar 31, 2026
Patent 12541382
USER PERSONA INJECTION FOR TASK-ORIENTED VIRTUAL ASSISTANTS
2y 5m to grant Granted Feb 03, 2026
Patent 12537090
METHOD AND SYSTEM FOR RULE-BASED ANONYMIZED DISPLAY AND DATA EXPORT
2y 5m to grant Granted Jan 27, 2026
Patent 12530694
USING ENTITLEMENTS DEPLOYED ON BLOCKCHAIN TO MANAGE CUSTOMER EXPERIENCES
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
30%
Grant Probability
64%
With Interview (+33.8%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 301 resolved cases by this examiner. Grant probability derived from career allow rate.
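The arithmetic behind these projections can be reproduced from the counts shown above (90 granted of 301 resolved, a +33.8 percentage-point interview lift). The sketch below is illustrative only; the tool's actual model is not disclosed, and the simple additive treatment of the interview lift is an assumption that happens to match the displayed figures.

```python
# Reproduce the displayed projections from the underlying counts on this page.
granted = 90      # cases granted by this examiner (career)
resolved = 301    # total resolved cases

# Grant probability is stated to be derived from the career allow rate.
grant_probability = granted / resolved

# Interview lift is shown as +33.8 percentage points; assume it adds
# directly to the baseline probability.
interview_lift = 0.338
with_interview = grant_probability + interview_lift

print(f"Grant probability: {grant_probability:.0%}")  # ~30%
print(f"With interview:    {with_interview:.0%}")     # ~64%
```

Note that 90 / 301 ≈ 29.9%, which rounds to the 30% shown, and 29.9% + 33.8% ≈ 63.7%, rounding to the 64% with-interview figure.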
