Prosecution Insights
Last updated: April 19, 2026
Application No. 19/004,768

VIRTUAL ROOM MAPPING BASED ON A 2D IMAGE

Status: Non-Final OA (§103)

Filed: Dec 30, 2024
Examiner: IMPERIAL, JED-JUSTIN
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Marxent Labs LLC
OA Round: 3 (Non-Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 6m
Grant Probability With Interview: 85%

Examiner Intelligence

Career Allow Rate: 73% (289 granted / 397 resolved; +10.8% vs TC avg, above average)
Interview Lift: +12.1% (moderate), comparing resolved cases with an interview to those without
Typical Timeline: 2y 6m average prosecution; 13 applications currently pending
Career History: 410 total applications across all art units

Statute-Specific Performance

§101: 4.1% (-35.9% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§102: 18.9% (-21.1% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)

Tech Center averages are estimates • Based on career data from 397 resolved cases
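A quick arithmetic check on this table (a hypothetical snippet, not part of the tool): subtracting each statute's "vs TC avg" delta from the examiner's rate implies the same 40.0% Tech Center baseline in all four rows, which is consistent with the footnote describing the TC average as a single estimate.

```python
# Hypothetical consistency check on the statute table above; the variable
# names are ours, and the figures are copied from the four rows.
examiner_rate = {"101": 4.1, "103": 59.2, "102": 18.9, "112": 8.5}    # percent
delta_vs_tc   = {"101": -35.9, "103": 19.2, "102": -21.1, "112": -31.5}
for statute, rate in examiner_rate.items():
    implied_baseline = rate - delta_vs_tc[statute]
    print(f"§{statute}: implied TC average = {implied_baseline:.1f}%")
# Every row prints 40.0%, i.e. one shared baseline estimate.
```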

Office Action

Rejection basis: §103

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This office action is responsive to the RCE/amendment filed on 01/26/2026. Claim(s) 1-28 is/are pending in the application. Independent claim(s) 1, 13, 20 was/were amended. Dependent claim(s) 4, 16 was/were amended. Claim(s) 27-28 was/were added.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/26/2026 has been entered.

Response to Arguments

Applicant's argument(s), regarding the amended portion(s) as recited in independent claim 1 (and similarly in independent claim(s) 13, 20), filed 01/26/2026, have/has been fully considered and is/are persuasive. However, upon further consideration, a new ground(s) of rejection is made, adding/using Bradley and Mullins to be relied upon for the aforementioned amended portion(s). To note, applicant's amendment necessitated the new ground(s) of rejection presented in this office action.

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claim(s) 1, 13, 20, 28 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bradley et al. (US 2021/0125398 A1) in view of Mullins et al. (US 2016/0247324 A1).

In regards to claim 1, Bradley teaches a computer-implemented method for facilitating room design from an image of a physical space, the method being performed by one or more processors programmed with computer instructions which, when executed, cause the one or more processors to: determine dimensions of the physical space from a physical object depicted in a two dimensional image of the physical space, wherein determining the dimensions of the physical space from the physical object comprises identifying at least one known real-world dimension associated with the physical object and using the known real-world dimension as a scale reference to determine the dimensions of the physical space (e.g. [0035],Fig.2: at a first step S201, at least one image of a plurality of objects in a scene is obtained; [0048],Fig.2: at a second step S202, at least some of the objects in the at least one image are detected as corresponding to pre-determined objects; [0064],Fig.2: at a fourth step S204, a relative size is determined for each detected object as projected into 3D space; the relative size is determined for each object based on a distance between at least two points corresponding to that object as transformed into 3D space; [0071],Fig.2: at a fifth step S205, a size probability distribution function (size PDF) is obtained for each object detected in the at least two images; each size PDF defines a range of sizes in at least one dimension that the corresponding object is likely to possess in real world units (e.g. metres, cm, millimetres, etc.); [0083],Fig.2: at a sixth step S206, the size PDF obtained for each detected object is re-scaled based on a corresponding relative size of that object in the 3D reconstruction; [0086],Fig.2: at a seventh step S207, a geometry of the scene is determined in real world units; this involves combining the re-scaled probability distribution function for at least one detected object with the re-scaled probability distribution function for at least one other detected object; Examiner's note: this shows that the geometry/dimensions (in real world units) are determined based on detected objects, their determined relative sizes as well as the obtained size PDFs of the objects, where the size PDF is viewed as a scale reference compared to the determined relative size); and generate a three dimensional virtual space that corresponds with dimensions of the physical space depicted in the two dimensional image comprising at least a virtual object with dimensions that correspond to dimensions of the physical object based on the determined dimensions of the physical space depicted in the two dimensional image (e.g. as above, [0086],Fig.2: at a seventh step S207, a geometry of the scene is determined in real world units; see also [0058],Fig.2: at a third step S203, a 3D reconstruction of the scene is generated based on the image content of the at least one image; even further [0095]: method may further comprise generating an image of a virtual object for display as part of at least one of an augmented, virtual and mixed reality environment; at least one of the size and position of the virtual object within the environment may correspond to the size and/or position of the object in the real world (i.e. correspond with the determined real-world geometry); [0096]: method may further comprise generating and displaying an annotated image of the scene; the annotated image may correspond to … a 2D render of the 3D reconstruction of the scene), but does not explicitly teach the method, wherein the virtual object is generated by retrieving a corresponding three-dimensional model from an object database based on identification of the physical object in the two dimensional image.

However, Mullins teaches a method, wherein the virtual object is generated by retrieving a corresponding three-dimensional model from an object database based on identification of the physical object in the two dimensional image (e.g. [0041]: the user 102 may direct a camera of the viewing device 101 to capture an image of the factory machine 114; the viewing device 101 identifies feature points (e.g. edges, corners, surface of the machine, unique spatial geometric patterns) in the picture or video frame of the factory machine 114 to identify the factory machine; viewing device 101 accesses a local library (e.g. local context recognition dataset or any other previously stored dataset of the AR application) of the viewing device 101 to retrieve a virtual object corresponding to the feature points; the local library may also include models of virtual objects associated with feature points of real-world physical objects or references). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings/combination of Bradley to retrieve corresponding models, in the same conventional manner as taught by Mullins as both deal with generating virtual content based on identified objects in a captured scene for display. The motivation to combine the two would be that it would allow the retrieval of not only pre-made models of identified objects but also additional information pertaining to those objects when constructing a 3D scene.

In regards to system claim 13 and product claim 20, claim(s) 13, 20 recite(s) limitations that is/are similar in scope to the limitations recited in claim 1. Therefore, claim(s) 13, 20 is/are subject to rejections under the same rationale as applied hereinabove for claim 1. To note, Bradley shows the use of processors in paragraph [0099] and a computer readable medium storing instructions in paragraph [0098].

In regards to claim 28, the combination of Bradley and Mullins teaches a method, wherein generating the three dimensional virtual space comprises: generating respective three dimensional virtual subspaces from a plurality of two dimensional images of the physical space captured from respective viewpoints and joining the three dimensional virtual subspaces by aligning them based on corresponding instances of the physical object depicted in the two dimensional image appearing in the plurality of two dimensional images (e.g. [0061]: in examples where at least two images are obtained at step S201, step S205 may involve determining for each pair of images, a corresponding fundamental matrix for that pair; the fundamental matrix for a respective image pair may then be used to map corresponding image points (corresponding to the same point in space, but from a different viewpoint) to a 3D space by performing a projective reconstruction; Examiner's note: this shows that 3D data (subspace) is determined for each image, which is then combined/aligned based on corresponding points in space, which may include the detected objects).

Claim(s) 2-7, 14-19, 27 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bradley and Mullins as applied to claims 1, 13 above, and further in view of Stekovic et al. (US 2021/0150805 A1).

In regards to claim 2, the combination of Bradley and Mullins teaches a method, wherein the one or more processors are programmed with computer instructions to determine dimensions of the physical space from an object depicted in a two dimensional image of the physical space comprise further program instructions, when executed, cause the one or more processors to: determine dimensions of the object depicted in the two dimensional image (e.g. Bradley as above, [0064],Fig.2: at a fourth step S204, a relative size is determined for each detected object as projected into 3D space; the relative size is determined for each object based on a distance between at least two points corresponding to that object as transformed into 3D space; see also [0065]: a specific length of the object in one or more dimensions can be measured in the 3D reconstruction); and determine the dimensions of the physical space based on the determined dimensions of the object depicted in the two dimensional image (e.g. Bradley as above, [0071],Fig.2: at a fifth step S205, a size probability distribution function (size PDF) is obtained for each object detected in the at least two images; [0083],Fig.2: at a sixth step S206, the size PDF obtained for each detected object is re-scaled based on a corresponding relative size of that object in the 3D reconstruction; [0086],Fig.2: at a seventh step S207, a geometry of the scene is determined in real world units), but does not explicitly teach the method, comprising further program instructions, when executed, cause the one or more processors to: detect planes of the physical space depicted in the two dimensional image using image analysis.

However, Stekovic teaches a method, comprising: detecting planes of the physical space depicted in the two dimensional image using image analysis (e.g. Abstract: one or more planes can be detected in an input image of an environment; see also [0109],Fig.14: at block 1402, the process 1400 includes detecting the one or more planes using a machine learning model, such as a convolutional neural network (CNN) trained to detect planes in images). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings/combination of Bradley and Mullins to detect planes, in the same conventional manner as taught by Stekovic as both deal with constructing/generating 3D scenes. The motivation to combine the two would be that it would allow detections of planes/surfaces using image analysis for use in constructing the 3D scene.

In regards to system claim 14, claim(s) 14 recite(s) limitations that is/are similar in scope to the limitations recited in claim 2. Therefore, claim(s) 14 is/are subject to rejections under the same rationale as applied hereinabove for claim 2.

In regards to claim 3, the combination of Bradley, Mullins and Stekovic teaches a method, wherein the one or more processors are programmed with computer instructions which, when executed, cause the one or more processors to further: extend at least one detected plane of the physical space that corresponds with a wall that is occluded by the object (e.g. Stekovic, [0069]: can infer the 3D planes of the layout from a monocular image with many objects occluding the layout structure, such as furniture in a room; Examiner's note: this shows planes are extended beyond that of occluding objects). In addition, the same rationale/motivation of claim 2 is used for claim 3.

In regards to system claim 15, claim(s) 15 recite(s) limitations that is/are similar in scope to the limitations recited in claim 3. Therefore, claim(s) 15 is/are subject to rejections under the same rationale as applied hereinabove for claim 3.

In regards to claim 4, the combination of Bradley, Mullins and Stekovic teaches a method, wherein the one or more processors are programmed with computer instructions which, when executed, cause the one or more processors to further: generate a corresponding image that depicts detected planes of the physical space (e.g. Stekovic as above, [0109],Fig.14: at block 1402, the process 1400 includes detecting the one or more planes using a machine learning model; see also [0120]: in some examples, the process 1400 includes generating an output image based on the three-dimensional layout of the environment). In addition, the same rationale/motivation of claim 3 is used for claim 4.

In regards to system claim 16, claim(s) 16 recite(s) limitations that is/are similar in scope to the limitations recited in claim 4. Therefore, claim(s) 16 is/are subject to rejections under the same rationale as applied hereinabove for claim 4.

In regards to claim 5, the combination of Bradley, Mullins and Stekovic teaches a method, wherein the one or more processors are programmed with computer instructions to determine dimensions of the object depicted in the two dimensional image using image analysis comprise further program instructions, when executed, cause the one or more processors to: identify one known dimension of the object depicted in the two dimensional image (e.g. Bradley as above, [0071],Fig.2: at a fifth step S205, a size probability distribution function (size PDF) is obtained for each object detected in the at least two images; each size PDF defines a range of sizes in at least one dimension that the corresponding object is likely to possess in real world units (e.g. metres, cm, millimetres, etc.); see also [0072]: size PDF may be obtained from e.g. a database that stores size PDFs for a plurality of everyday objects (e.g. people, items of furniture, etc.)).

In regards to system claim 17, claim(s) 17 recite(s) limitations that is/are similar in scope to the limitations recited in claim 5. Therefore, claim(s) 17 is/are subject to rejections under the same rationale as applied hereinabove for claim 5.

In regards to claim 6, the combination of Bradley, Mullins and Stekovic teaches a method, wherein the one or more processors are programmed with computer instructions which, when executed, cause the one or more processors to further: determine dimensions of other objects depicted in the two dimensional image based on the identified one known dimension of the object (e.g. Bradley, [0092]: the relative size of each of the objects in the scene, as well as the distances of those objects from the camera(s), can be determined in real-world units; that is, an estimate of the real-world size of the objects, and any features measurable in the 3D reconstruction such as walls, floors, ceilings, etc. can be obtained; this may allow, for example, the measuring of objects or physical structures for which a corresponding size PDF was not obtained and re-scaled; Examiner's note: this shows that dimensions of other objects may be determined based on the known dimensions of an object).

In regards to system claim 18, claim(s) 18 recite(s) limitations that is/are similar in scope to the limitations recited in claim 6. Therefore, claim(s) 18 is/are subject to rejections under the same rationale as applied hereinabove for claim 6.

In regards to claim 7, the combination of Bradley, Mullins and Stekovic teaches a method, wherein the one or more processors are programmed with computer instructions which, when executed, cause the one or more processors to further: determine the dimensions of the physical space using relative dimensions of the determined dimensions of the other objects to the physical space (e.g. Bradley as above, [0064],Fig.2: at a fourth step S204, a relative size is determined for each detected object as projected into 3D space; [0071],Fig.2: at a fifth step S205, a size probability distribution function (size PDF) is obtained for each object detected in the at least two images; [0083],Fig.2: at a sixth step S206, the size PDF obtained for each detected object is re-scaled based on a corresponding relative size of that object in the 3D reconstruction; [0086],Fig.2: at a seventh step S207, a geometry of the scene is determined in real world units).

In regards to system claim 19, claim(s) 19 recite(s) limitations that is/are similar in scope to the limitations recited in claim 7. Therefore, claim(s) 19 is/are subject to rejections under the same rationale as applied hereinabove for claim 7.

In regards to claim 27, the combination of Bradley, Mullins and Stekovic teaches a method, wherein identifying the one known dimension of the object depicted in the two-dimensional image comprises: recognizing the object depicted in the two-dimensional image (e.g. Bradley as above, [0035],Fig.2: at a first step S201, at least one image of a plurality of objects in a scene is obtained; [0048],Fig.2: at a second step S202, at least some of the objects in the at least one image are detected as corresponding to pre-determined objects); and retrieving the one known dimension from an object database keyed to a product identifier of the object (e.g. Bradley as above, [0071],Fig.2: at a fifth step S205, a size probability distribution function (size PDF) is obtained for each object detected in the at least two images; each size PDF defines a range of sizes in at least one dimension that the corresponding object is likely to possess in real world units (e.g. metres, cm, millimetres, etc.); see also [0072]: size PDF may be obtained from e.g. a database that stores size PDFs for a plurality of everyday objects (e.g. people, items of furniture, etc.)).

Claim(s) 8, 21-23, 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bradley and Mullins as applied to claim 1 above, and further in view of Besecker et al. (US 2019/0325643 A1).

In regards to claim 8, the combination of Bradley and Mullins teaches the method of claim 1, wherein the one or more processors are programmed with computer instructions to generate a three dimensional virtual space based on the determined dimensions of the physical space depicted in the two dimensional image, wherein the three dimensional virtual space has dimensions corresponding with the determined dimensions of the physical space (e.g. Bradley as above, [0086],Fig.2: at a seventh step S207, a geometry of the scene is determined in real world units; see also [0058],Fig.2: at a third step S203, a 3D reconstruction of the scene is generated based on the image content of the at least one image; even further [0095]: method may further comprise generating an image of a virtual object for display as part of at least one of an augmented, virtual and mixed reality environment; at least one of the size and position of the virtual object within the environment may correspond to the size and/or position of the object in the real world (i.e. correspond with the determined real-world geometry); [0096]: method may further comprise generating and displaying an annotated image of the scene; the annotated image may correspond to … a 2D render of the 3D reconstruction of the scene), but does not explicitly teach the method comprise further program instructions, when executed, cause the one or more processors to: populate the three dimensional virtual space with one or more virtual objects that correspond to one or more objects in the physical space depicted in the two dimensional image, and wherein the one or more virtual objects have dimensions corresponding with dimensions of the one or more objects in the physical space.

However, Besecker teaches a method, comprising: populating the three dimensional virtual space with one or more virtual objects that correspond to one or more objects in the physical space depicted in the two dimensional image, and wherein the one or more virtual objects have dimensions corresponding with dimensions of the one or more objects in the physical space (e.g. [0042],Fig.3: at 304, a product and/or grouping of products from the 2D image may be selected for placement in the 3D virtual environment established at 302; [0043]: criteria for identifying products matching the arrangement and/or decor style may include: 3D dimensions, color, texture, composing material, and/or function of the products, among others; [0050],Fig.3: at 318, the technique may terminate or continue as more products from the 2D image may or might not be selected for addition to the 3D virtual environment; Examiner's note: this shows all selected objects will be placed/populated into the 3D space). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings/combination of Bradley and Mullins to furnish a scene, in the same conventional manner as taught by Besecker as both deal with constructing/generating 3D scenes. The motivation to combine the two would be that it would allow the user to furnish the constructed 3D scene with objects as shown in the captured image.

In regards to claim 21, the combination of Bradley and Mullins teaches the method of claim 1, but does not explicitly teach the method, wherein the one or more processors are programmed with computer instructions which, when executed, cause the one or more processors to further: size new virtual objects relative to the virtual object and the three dimensional virtual space. However, Besecker teaches a method, comprising: sizing new virtual objects relative to the virtual object and the three dimensional virtual space (e.g. as above, [0042],Fig.3: at 304, a product and/or grouping of products from the 2D image may be selected for placement in the 3D virtual environment established at 302; [0043]: criteria for identifying products matching the arrangement and/or decor style may include: 3D dimensions, color, texture, composing material, and/or function of the products, among others; Examiner's note: this suggests virtual objects corresponding to objects in the physical space are chosen/sized based on the actual objects). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified the teachings/combination of Bradley and Mullins to size objects, in the same conventional manner as taught by Besecker as both deal with constructing/generating 3D scenes. The motivation to combine the two would be that it would allow the user to size virtual objects based on the surrounding objects/scene.

In regards to claim 22, the combination of Bradley and Mullins teaches the method of claim 1, but does not explicitly teach the method, wherein the one or more processors are programmed with computer instructions which, when executed, cause the one or more processors to further: define a two dimensional profile of the virtual object and applying rules for offsetting style profiles from a face of the virtual object; and dynamically resize the virtual object to fit a selected size within the three dimensional virtual space. However, Besecker teaches a method, comprising: defining a two dimensional profile of the virtual object and applying rules for offsetting style profiles from a face of the virtual object (e.g. [0072]: system 100 may be configured for cabinet doors by defining a 2D profile of the door and the rules for how to offset door and drawer style profiles from the face of a cabinet); and dynamically resizing the virtual object to fit a selected size within the three dimensional virtual space (e.g. further in [0072]: system 100 may be configured to dynamically "stretch" the door parameters to fit one door to any cabinet size, instead of modeling every door shape and size). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified the teachings/combination of Bradley and Mullins to resize objects, in the same conventional manner as taught by Besecker as both deal with constructing/generating 3D scenes. The motivation to combine the two would be that it would allow the user to size virtual objects using rules provided by the profile of the object.

In regards to claim 23, the combination of Bradley, Mullins and Besecker teaches a method, wherein the one or more processors are programmed with computer instructions which, when executed, cause the one or more processors to further: render an assembly comprising a plurality of virtual objects arranged according to a defined layout, wherein the assembly includes at least one virtual object mounted on another virtual object based on one or more rules or metadata specifying fit, location, and compatibility (e.g. Besecker, [0073]: system 100 may be configured to render "assemblies," which are objects mounted on other objects and/or arranged into some kind of layout; system 100 can be configured with the ability to mount an object on another object using one or more rules and/or metadata that may define fit, location, and/or compatibility); and enable editing of the assembly or specifying the assembly as being non-editable within the three dimensional virtual space (e.g. Besecker, further in [0073]: system 100 can be configured such that assemblies can also be editable or not editable). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified the teachings/combination of Bradley and Mullins to assemble objects, in the same conventional manner as taught by Besecker as both deal with constructing/generating 3D scenes. The motivation to combine the two would be that it would allow the user to group objects in a layout such that the objects are considered a single entity.

In regards to claim 26, the combination of Bradley and Mullins teaches the method of claim 1, but does not explicitly teach the method, wherein the one or more processors are programmed with computer instructions which, when executed, cause the one or more processors to further: enable customization of the three dimensional virtual space by allowing at least the virtual object to be removed, replaced, moved, or modified within the three dimensional virtual space. However, Besecker teaches a method, comprising: enabling customization of the three dimensional virtual space by allowing at least the virtual object to be removed, replaced, moved, or modified within the three dimensional virtual space (e.g. [0061]: composition of the 3D virtual environment and/or the addition of 3D models may be done iteratively; for example, a preliminary 3D virtual environment may be composed, and then after 3D models are added the 3D virtual environment may be modified; virtual objects may be added to or subtracted from the modified 3D virtual environment, and then may be modified again; the process can be repeated over and over). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified the teachings/combination of Stekovic to customize objects, in the same conventional manner as taught by Besecker as both deal with generating 3D room layouts. The motivation to combine the two would be that it would allow the user to not only furnish the 3D room layout with objects but also add, remove and edit the objects.

Claim(s) 9-10 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bradley, Mullins and Besecker as applied to claim 8 above, and further in view of Stekovic et al. (US 2021/0150805 A1).

In regards to claim 9, the combination of Bradley, Mullins and Besecker teaches the method of claim 8, wherein the one or more processors are programmed with computer instructions which, when executed, cause the one or more processors to further: customize a design of the generated three dimensional virtual space, wherein customizing a design of the three dimensional virtual space includes generating new virtual objects, customizing the new virtual objects, removing the new virtual objects, or replace the new virtual objects in the generated three dimensional virtual space (e.g. Besecker, [0061]: composition of the 3D virtual environment and/or the addition of 3D models may be done iteratively; for example, a preliminary 3D virtual environment may be composed, and then after 3D models are added the 3D virtual environment may be modified; virtual objects may be added to or subtracted from the modified 3D virtual environment, and then may be modified again; the process can be repeated over and over), but does not explicitly teach the method, comprising program instructions, when executed, cause the one or more processors to: provide a user interface to a user that is configured to enable user selection of one or more options that change depiction of the one or more virtual objects in the generated three dimensional virtual space.

However, Stekovic teaches a method, comprising: providing a user interface to a user that is configured to enable user selection of one or more options that change depiction of the one or more virtual objects in the generated three dimensional virtual space (e.g. [0122]-[0123]: process 1400 includes receiving a user input to manipulate the three-dimensional model, and adjusting at least one of a pose, a location, and/or a property of the three-dimensional model in an output image based on the user input; property of the 3D model can include an appearance of the 3D model (e.g. texture, color, sheen, reflectance, among others)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified the teachings/combination of Bradley, Mullins and Besecker to change objects, in the same conventional manner as taught by Stekovic as both deal with constructing/generating 3D scenes. The motivation to combine the two would be that it would allow the user to change and modify the different characteristics of the virtual objects in the scene.

In regards to claim 10, the combination of Bradley, Mullins, Besecker and Stekovic teaches a method, wherein the one or more processors are programmed with computer instructions which, when executed, cause the one or more processors to further: size the new virtual objects relative to the one or more virtual objects already depicted in the three dimensional virtual space (e.g. Besecker as above, [0042],Fig.3: at 304, a product and/or grouping of products from the 2D image may be selected for placement in the 3D virtual environment established at 302; [0043]: criteria for identifying products matching the arrangement and/or decor style may include: 3D dimensions, color, texture, composing material, and/or function of the products, among others; Examiner's note: this suggests virtual objects corresponding to objects in the physical space are chosen/sized based on the actual objects).

Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bradley and Mullins as applied to claim 1 above, and further in view of Khosravan et al. (US Pat. 11,783,385 B1).

In regards to claim 11, the combination of Bradley and Mullins teaches the method of claim 1, but does not explicitly teach the method, wherein the one or more processors are programmed with computer instructions which, when executed, cause the one or more processors to further: identify a style of the physical space based on contextual information determined from the two dimensional image of the physical space, wherein the contextual information includes dimensions of the physical space, identification of objects in the physical space, and dimensions of the objects in the physical space. However, Khosravan teaches a method, comprising: identifying a style of the physical space based on contextual information determined from the two dimensional image of the physical space, wherein the contextual information includes dimensions of the physical space, identification of objects in the physical space, and dimensions of the objects in the physical space (e.g. c.11 L.13-54: to identify the architectural style of a home, the valuation computer system 100 can implement computer vision techniques to isolate interior property features/attributes and match them against templates of interior property features/attributes annotated with architectural styles; c.16 L.66-c.17 L.6, Fig.9: at block 904, process 900 identifies interior features of the image data, using a computer vision module; the interior features can include windows, doors, kitchen appliances, cabinetry, molding, and so on; in some implementations, process 900 further determines attributes (e.g. size, style, material, type, condition) of the identified interior features; the attributes can be associated with a room type of the interior features, such as a kitchen or a bathroom). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings/combination of Bradley and Mullins to identify style, in the same conventional manner as taught by Khosravan as both deal with virtual scene representations. The motivation to combine the two would be that it would allow the user to determine a style of the scene based on features of the captured scene.

Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bradley, Mullins and Khosravan as applied to claim 11 above, and further in view of Beauchamp et al. (US 2023/0410436 A1).

In regards to claim 12, the combination of Bradley, Mullins and Khosravan teaches the method of claim 11, but does not explicitly teach the method, wherein the one or more processors are programmed with computer instructions to identify a style of the physical space based on contextual information determined from the two dimensional image of the physical space comprise further program instructions, when executed, cause the one or more processors to: identify product names of objects depicted in the physical space; and determine a design style associated with the physical space based on the identified product names of the objects depicted in the physical space. However, Beauchamp teaches a method, comprising: identifying product names of objects depicted in the physical space (e.g. [0203]: the camera of the customer device may capture an image of a bathmat and the client app may apply an object recognition function on the image of the bathmat to identify/classify the type of object in the image is a bathmat or identify the brand of the bathmat); and determining a design style associated with the physical space based on the identified product names of the objects depicted in the physical space (e.g. [0204]: by determining the objects or types of objects typically associated with the regions or types of regions, the region prediction engine may predict an appropriate region or type of region having attributes (e.g. types of objects, objects, spatial features) relevant to or routinely associated with the objects or types of objects and/or may infer the customer's desired region in which to preview the new object from ambiguous instructions; Examiner's note: this shows that based on the identified objects, the type of region (style) may be predicted). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings/combination of Stekovic and Khosravan to determine product names, in the same conventional manner as taught by Beauchamp as both deal with virtual scene representations. The motivation to combine the two would be that identification of product names/brands would help in determining a scene style.

Claim(s) 24 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bradley and Mullins as applied to claim 1 above, and further in view of Xu et al. ("Constraint-based Automatic Placement for Scene Composition").

In regards to claim 24, the combination of Bradley and Mullins teaches the method of claim 1, but does not explicitly teach the method, wherein the one or more processors are programmed with computer instructions which, when executed, cause the one or more processors to further: determine compatibility and fit between two or more virtual objects placed within the three dimensional virtual space; and automatically arrange the two or more virtual objects based on such compatibility and fit, such that respective virtual objects are positioned in the three dimensional virtual space according to allowable placement relationships, including supporting placement of one virtual object on or within another virtual object. However, Xu teaches a method, comprising: determining compatibility and fit between two or more virtual objects placed within the three dimensional virtual space (e.g. Section 3: CAPS can lay out large numbers of objects simultaneously; it exploits semantic information to aid in the placement; it permits objects to be placed randomly (within the limits of their placement constraints and pseudo-physical constraints); Section 3.2: set of constraints is associated with each object to define where the object may or may not be placed; Section 3.2.1: surface constraint indicates how the object is to be placed on the surface of another; proximity constraint indicates how close the object should be placed relative to another object; support constraint indicates whether the object can support others and whether it can be supported by others); and automatically arranging the two or more virtual objects based on such compatibility and fit, such that respective virtual objects are positioned in the three dimensional virtual space according to allowable placement relationships, including supporting placement of one virtual object on or within another virtual object (e.g. as above, Section 3: CAPS can lay out large numbers of objects simultaneously; it exploits semantic information to aid in the placement; it permits objects to be placed randomly (within the limits of their placement constraints and pseudo-physical constraints)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings/combination of Bradley and Mullins to place virtual objects, in the same conventional manner as taught by Xu as both deal with virtual scene representations. The motivation to combine the two would be that it would allow the placement of virtual objects into the scene based on constraints.

Claim(s) 25 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bradley and Mullins as applied to claim 1 above, and further in view of Gonzalez Delgado et al. (US 2023/0379649 A1).

In regards to claim 25, the combination of Bradley and Mullins teaches the method of claim 1, but does not explicitly teach the method, wherein the one or more processors are programmed with computer instructions which, when executed, cause the one or more processors to further: enable assignment of one or more composition properties to virtual objects within the three dimensional virtual space, the composition properties including at least one of: animation of object parts to display internal components or demonstrate ranges of motion, generation of sounds associated with virtual objects, and control of lighting properties for virtual objects representing lights, lamps, or ceiling fans, the lighting properties comprising angle, spread, and intensity settings. However, Gonzalez Delgado teaches a method, comprising: enabling assignment of one or more composition properties to virtual objects within the three dimensional virtual space, the composition properties including at least one of: animation of object parts to display internal components or demonstrate ranges of motion, generation of sounds associated with virtual objects, and control of lighting properties for virtual objects representing lights, lamps, or ceiling fans, the lighting properties comprising angle, spread, and intensity settings (e.g. [0087]: the user may then place the virtual object (Fig.6, 614) within the extended reality environment (Fig.6, 610); sound characteristics for the virtual object (Fig.6, 614) are assigned; for example, the sound characteristics for the virtual object (Fig.6, 614) may define how the virtual object (Fig.6, 614) is to generate sound within the extended reality environment (Fig.6, 610)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings/combination of Bradley and Mullins to assign characteristics to virtual objects, in the same conventional manner as taught by Gonzalez Delgado as both deal with virtual environments. The motivation to combine the two would be that it would allow assignment of characteristics, such as sound, to virtual objects within the scene layout.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JED-JUSTIN IMPERIAL whose telephone number is (571)270-5807. The examiner can normally be reached Monday to Friday, 9am - 6pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JED-JUSTIN IMPERIAL/
Examiner, Art Unit 2616

/DANIEL F HAJNIK/
Supervisory Patent Examiner, Art Unit 2616
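For readers skimming the rejection, the limitation in dispute in claim 1 is the scale-reference step: a recognized object with a known real-world dimension anchors the metric scale of everything else in the 2D image. Below is a minimal sketch of that idea. All names and numbers are hypothetical; this is not code from Bradley, Mullins, or the application, and it assumes the reference object and the measured span lie at roughly the same depth from the camera.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectedObject:
    label: str
    pixel_width: float                      # bounding-box width in the 2D image
    known_width_m: Optional[float] = None   # real-world width, if recognized

def estimate_width_m(objects: list[DetectedObject], span_pixels: float) -> float:
    """Scale a pixel span to metres using a recognized object as reference."""
    for obj in objects:
        if obj.known_width_m is not None:
            metres_per_pixel = obj.known_width_m / obj.pixel_width
            return span_pixels * metres_per_pixel
    raise ValueError("no scale-reference object with a known dimension")

# Example: a standard interior door (~0.81 m wide) spans 120 px; the back wall
# spans 960 px, so the wall is roughly 960 * (0.81 / 120) = 6.48 m.
door = DetectedObject("door", pixel_width=120.0, known_width_m=0.81)
print(f"{estimate_width_m([door], span_pixels=960.0):.2f} m")
```

Bradley's cited steps S204-S207 are a probabilistic variant of the same idea: each detected object contributes a size distribution (a size PDF) rather than a single known dimension, and the distributions are combined to estimate one scene scale. A toy illustration of that fusion, again with invented numbers and a Gaussian standing in for each size PDF:

```python
import numpy as np

# Each object has a unitless relative size from the 3D reconstruction and a
# Gaussian prior over its real-world size in metres (toy stand-in for a PDF).
objects = [
    {"rel_size": 1.8, "mean_m": 0.90, "std_m": 0.10},  # door-like object
    {"rel_size": 1.5, "mean_m": 0.75, "std_m": 0.15},  # table-like object
]

scales = np.linspace(0.1, 2.0, 2000)        # candidate metres-per-unit factors
log_post = np.zeros_like(scales)
for obj in objects:
    implied_m = obj["rel_size"] * scales    # real size implied by each scale
    log_post += -0.5 * ((implied_m - obj["mean_m"]) / obj["std_m"]) ** 2

print(f"estimated scene scale: {scales[np.argmax(log_post)]:.3f} m per unit")
# Both priors peak at 0.5 m per reconstruction unit, so the estimate is ~0.5.
```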

Prosecution Timeline

Dec 30, 2024: Application Filed
Mar 10, 2025: Non-Final Rejection — §103
Jun 16, 2025: Response Filed
Jul 22, 2025: Final Rejection — §103
Jan 26, 2026: Request for Continued Examination
Jan 30, 2026: Response after Non-Final Action
Feb 04, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602890: NEURAL VECTOR FIELDS FOR 3D SHAPE GENERATION
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597225: IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, PROGRAM, AND READABLE STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586332: METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR MANIPULATING VIRTUAL OBJECT
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579750: RENDERING VIEWS OF A SCENE IN A GRAPHICS PROCESSING UNIT
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12541934: SYSTEM AND METHOD FOR RAPID SENSOR COMMISSIONING
Granted Feb 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 85% (+12.1%)
Median Time to Grant: 2y 6m
PTA Risk: High

Based on 397 resolved cases by this examiner. Grant probability derived from career allow rate.
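The headline figures relate by simple arithmetic; a hypothetical reconstruction (not the tool's actual code):

```python
# Hypothetical reconstruction of how the projections above are derived.
granted, resolved = 289, 397
allow_rate = granted / resolved        # 0.728 -> displayed as 73%
interview_lift = 0.121                 # +12.1 percentage points
print(f"base: {allow_rate:.1%}, with interview: {allow_rate + interview_lift:.1%}")
# -> base: 72.8%, with interview: 84.9% (displayed as 73% and 85%)
```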
